Dec 03 00:06:15 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 03 00:06:15 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 00:06:15 crc restorecon[4684]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc 
restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc 
restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 
00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc 
restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc 
restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15
crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 00:06:15 crc restorecon[4684]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 
crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc 
restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 00:06:15 crc restorecon[4684]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:15 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc 
restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 00:06:16 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 03 00:06:16 crc kubenswrapper[4805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:06:16 crc kubenswrapper[4805]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 00:06:16 crc kubenswrapper[4805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:06:16 crc kubenswrapper[4805]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 00:06:16 crc kubenswrapper[4805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 00:06:16 crc kubenswrapper[4805]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.262624 4805 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268392 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268448 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268458 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268470 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268480 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268489 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268497 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268505 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268512 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268520 4805 
feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268528 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268536 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268543 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268551 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268559 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268566 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268574 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268582 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268593 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268603 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268611 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268620 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268629 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268638 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268663 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268672 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268680 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268688 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268696 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268705 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268713 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268721 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268729 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268737 4805 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268744 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268753 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268761 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268769 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268780 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268793 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268807 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268816 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268825 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268834 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268842 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268851 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268859 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268868 4805 feature_gate.go:330] unrecognized feature gate: Example Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268876 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268884 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268892 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268899 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268907 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268915 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268923 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268931 4805 
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268942 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268951 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268959 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268967 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268975 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268983 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.268992 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269005 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269028 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269041 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269052 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269062 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269072 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269081 4805 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.269090 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269322 4805 flags.go:64] FLAG: --address="0.0.0.0" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269357 4805 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269380 4805 flags.go:64] FLAG: --anonymous-auth="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269395 4805 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269408 4805 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269418 4805 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269432 4805 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269443 4805 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269453 4805 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269462 4805 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269472 4805 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269487 4805 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269496 4805 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269505 4805 flags.go:64] FLAG: --cgroup-root="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 
00:06:16.269514 4805 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269523 4805 flags.go:64] FLAG: --client-ca-file="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269533 4805 flags.go:64] FLAG: --cloud-config="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269543 4805 flags.go:64] FLAG: --cloud-provider="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269552 4805 flags.go:64] FLAG: --cluster-dns="[]" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269563 4805 flags.go:64] FLAG: --cluster-domain="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269572 4805 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269582 4805 flags.go:64] FLAG: --config-dir="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269591 4805 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269601 4805 flags.go:64] FLAG: --container-log-max-files="5" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269613 4805 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269622 4805 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269632 4805 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269641 4805 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269651 4805 flags.go:64] FLAG: --contention-profiling="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269659 4805 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269668 4805 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 
00:06:16.269678 4805 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269687 4805 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269699 4805 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269722 4805 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269738 4805 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269760 4805 flags.go:64] FLAG: --enable-load-reader="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269774 4805 flags.go:64] FLAG: --enable-server="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269786 4805 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269801 4805 flags.go:64] FLAG: --event-burst="100" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269814 4805 flags.go:64] FLAG: --event-qps="50" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269826 4805 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269838 4805 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269849 4805 flags.go:64] FLAG: --eviction-hard="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269863 4805 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269874 4805 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269884 4805 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269900 4805 flags.go:64] FLAG: --eviction-soft="" Dec 03 00:06:16 crc kubenswrapper[4805]: 
I1203 00:06:16.269912 4805 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269924 4805 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269936 4805 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269947 4805 flags.go:64] FLAG: --experimental-mounter-path="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269958 4805 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269969 4805 flags.go:64] FLAG: --fail-swap-on="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269980 4805 flags.go:64] FLAG: --feature-gates="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.269995 4805 flags.go:64] FLAG: --file-check-frequency="20s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270007 4805 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270019 4805 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270032 4805 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270044 4805 flags.go:64] FLAG: --healthz-port="10248" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270055 4805 flags.go:64] FLAG: --help="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270066 4805 flags.go:64] FLAG: --hostname-override="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270077 4805 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270090 4805 flags.go:64] FLAG: --http-check-frequency="20s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270101 4805 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 
00:06:16.270115 4805 flags.go:64] FLAG: --image-credential-provider-config="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270126 4805 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270137 4805 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270148 4805 flags.go:64] FLAG: --image-service-endpoint="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270159 4805 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270169 4805 flags.go:64] FLAG: --kube-api-burst="100" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270181 4805 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270192 4805 flags.go:64] FLAG: --kube-api-qps="50" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270238 4805 flags.go:64] FLAG: --kube-reserved="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270250 4805 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270260 4805 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270271 4805 flags.go:64] FLAG: --kubelet-cgroups="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270283 4805 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270296 4805 flags.go:64] FLAG: --lock-file="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270307 4805 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270318 4805 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270331 4805 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 00:06:16 crc kubenswrapper[4805]: 
I1203 00:06:16.270364 4805 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270377 4805 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270389 4805 flags.go:64] FLAG: --log-text-split-stream="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270400 4805 flags.go:64] FLAG: --logging-format="text" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270411 4805 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270423 4805 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270434 4805 flags.go:64] FLAG: --manifest-url="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270446 4805 flags.go:64] FLAG: --manifest-url-header="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270461 4805 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270473 4805 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270488 4805 flags.go:64] FLAG: --max-pods="110" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270500 4805 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270512 4805 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270524 4805 flags.go:64] FLAG: --memory-manager-policy="None" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270535 4805 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270546 4805 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270557 4805 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 03 00:06:16 
crc kubenswrapper[4805]: I1203 00:06:16.270568 4805 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270595 4805 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270606 4805 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270617 4805 flags.go:64] FLAG: --oom-score-adj="-999" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270629 4805 flags.go:64] FLAG: --pod-cidr="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270639 4805 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270658 4805 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270668 4805 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270680 4805 flags.go:64] FLAG: --pods-per-core="0" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270690 4805 flags.go:64] FLAG: --port="10250" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270703 4805 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270715 4805 flags.go:64] FLAG: --provider-id="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270727 4805 flags.go:64] FLAG: --qos-reserved="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270738 4805 flags.go:64] FLAG: --read-only-port="10255" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270750 4805 flags.go:64] FLAG: --register-node="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270763 4805 flags.go:64] FLAG: --register-schedulable="true" Dec 03 00:06:16 crc 
kubenswrapper[4805]: I1203 00:06:16.270774 4805 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270796 4805 flags.go:64] FLAG: --registry-burst="10" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270808 4805 flags.go:64] FLAG: --registry-qps="5" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270818 4805 flags.go:64] FLAG: --reserved-cpus="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270831 4805 flags.go:64] FLAG: --reserved-memory="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270858 4805 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270869 4805 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270881 4805 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270892 4805 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270903 4805 flags.go:64] FLAG: --runonce="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270915 4805 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270927 4805 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270939 4805 flags.go:64] FLAG: --seccomp-default="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270950 4805 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270962 4805 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270974 4805 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270986 4805 flags.go:64] FLAG: 
--storage-driver-host="localhost:8086" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.270997 4805 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271008 4805 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271019 4805 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271028 4805 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271038 4805 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271048 4805 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271057 4805 flags.go:64] FLAG: --system-cgroups="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271066 4805 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271081 4805 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271091 4805 flags.go:64] FLAG: --tls-cert-file="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271099 4805 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271112 4805 flags.go:64] FLAG: --tls-min-version="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271121 4805 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271130 4805 flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271142 4805 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271152 4805 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271162 
4805 flags.go:64] FLAG: --v="2" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271175 4805 flags.go:64] FLAG: --version="false" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271188 4805 flags.go:64] FLAG: --vmodule="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271877 4805 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.271895 4805 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272187 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272228 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272242 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272251 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272259 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272270 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272289 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272307 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272318 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272329 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272375 4805 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272385 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272395 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272407 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272416 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272424 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272431 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272439 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272446 4805 feature_gate.go:330] unrecognized feature gate: Example Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272454 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272462 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272470 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272477 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272485 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272493 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 
00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272501 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272508 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272516 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272524 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272532 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272540 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272548 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272555 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272563 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272570 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272631 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272640 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272648 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272659 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272667 4805 feature_gate.go:330] 
unrecognized feature gate: GatewayAPI Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272675 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272683 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272691 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272702 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272713 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272730 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272754 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272766 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272778 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272791 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272803 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272813 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272823 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272830 4805 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272839 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272846 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272857 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272867 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272875 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272885 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272893 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272902 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272910 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272917 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272926 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272933 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272942 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272949 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272957 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272964 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.272972 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.272986 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.282116 4805 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.282151 4805 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282253 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282264 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282270 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282276 4805 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282280 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282285 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282292 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282299 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282304 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282310 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282316 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282320 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282325 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282332 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282338 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282344 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282349 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282355 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282359 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282365 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282370 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282375 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282380 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282385 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282391 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282396 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282401 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282407 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282411 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282415 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282421 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282426 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282431 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282436 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282448 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282452 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282457 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282462 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282467 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282472 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282478 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282484 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282489 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282495 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282500 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282505 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282509 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282515 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282520 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282525 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282530 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282534 4805 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282539 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282544 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282549 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282554 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282558 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282563 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282568 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282574 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282579 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282586 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282590 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282595 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282600 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282605 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282609 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282614 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282619 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282625 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282632 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.282640 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282819 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282829 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282836 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282844 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282851 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282857 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282863 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282869 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282874 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282881 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282887 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282892 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282898 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282904 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282910 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282915 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282919 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282924 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282929 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282933 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282940 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282948 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282953 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282959 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282965 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282970 4805 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282976 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282982 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282989 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282994 4805 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.282999 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283004 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283009 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283014 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283020 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283025 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283030 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283036 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283041 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283045 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283050 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283055 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283062 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283068 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283073 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283078 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283084 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283089 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283095 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283100 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283106 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283323 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283330 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283337 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283343 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283347 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283353 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283358 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283363 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283368 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283373 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283378 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283383 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283388 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283393 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283398 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283403 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283408 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283413 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283418 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.283425 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.283432 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.283651 4805 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.286439 4805 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.286540 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.287055 4805 server.go:997] "Starting client certificate rotation"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.287087 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.287557 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-27 22:42:00.707457264 +0000 UTC
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.287752 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.292319 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.294269 4805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.294295 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.302176 4805 log.go:25] "Validated CRI v1 runtime API"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.322034 4805 log.go:25] "Validated CRI v1 image API"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.323506 4805 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.327999 4805 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-03-00-01-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.328049 4805 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.349286 4805 manager.go:217] Machine: {Timestamp:2025-12-03 00:06:16.347770733 +0000 UTC m=+0.196733369 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c7baa8da-c025-4f62-9ac6-1cb1b9cf4097 BootID:59e4b4c6-95e9-49e1-956a-d67d8a6ba8db Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ef:c4:84 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ef:c4:84 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:12:8a:89 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:15:91:e6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:98:0a:9e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:12:6a:1f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:79:c6:1f:a3:b3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:4c:15:93:33:25 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.349579 4805 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.349895 4805 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.350820 4805 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.351121 4805 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.351181 4805 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.351499 4805 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.351514 4805 container_manager_linux.go:303] "Creating device plugin manager"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.351743 4805 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.351784 4805 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.352115 4805 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.352253 4805 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.352946 4805 kubelet.go:418] "Attempting to sync node with API server"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.352973 4805 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.353006 4805 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.353027 4805 kubelet.go:324] "Adding apiserver pod source"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.353042 4805 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.355123 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.355296 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.355531 4805 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.355621 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.355734 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.355988 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.356932 4805 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357605 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357641 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357652 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357672 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357687 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357697 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357706 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357722 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357734 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357746 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357783 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357795 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.357835 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.358462 4805 server.go:1280] "Started kubelet"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.358944 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.359005 4805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.360173 4805 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.360822 4805 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.360948 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.361019 4805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 00:06:16 crc systemd[1]: Started Kubernetes Kubelet.
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.361159 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:52:05.813556438 +0000 UTC
Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.361405 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.361487 4805 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.361506 4805 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.361697 4805 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.362485 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.364069 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.364490 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d8bd518bcb566 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:06:16.358417766 +0000 UTC m=+0.207380382,LastTimestamp:2025-12-03 00:06:16.358417766 +0000 UTC m=+0.207380382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.366744 4805 factory.go:55] Registering systemd factory
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.367898 4805 factory.go:221] Registration of the systemd container factory successfully
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.370611 4805 factory.go:153] Registering CRI-O factory
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.370640 4805 factory.go:221] Registration of the crio container factory successfully
Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.370570 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.370761 4805 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.370806 4805 factory.go:103] Registering Raw factory
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.370833 4805 manager.go:1196] Started watching for new ooms in manager
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.371476 4805 server.go:460] "Adding debug handlers to kubelet server"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.371614 4805 manager.go:319] Starting recovery of all containers
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377089 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377257 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377282 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377300 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377318 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377335 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377354 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377369 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377393 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377411 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377428 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377445 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377465 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377484 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377500 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377517 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377539 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377559 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377581 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377598 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377615 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377635 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377651 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377684 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377703 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377726 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377748 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377768 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377791 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377810 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377831 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377854 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377875 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377896 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377915 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377937 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377959 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.377981 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378014 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378033 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378052 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378292 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378316 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378336 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378354 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378384 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378404 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378420 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378440 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378459 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378479 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378498 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378528 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378550 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378573 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378593 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378614 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378632 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378653 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378674 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378693 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.378715 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382569 4805 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382640 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382670 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382686 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382701 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382714 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382727 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382741 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382757 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382773 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382788 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382804 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382817 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382832 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382845 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382861 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382874 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382887 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd"
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382902 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382917 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382932 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382951 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382964 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382982 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" 
seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.382997 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383010 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383023 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383036 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383050 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383065 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 
00:06:16.383079 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383092 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383105 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383119 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383132 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383145 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383158 4805 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383170 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383185 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383234 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383253 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383269 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383284 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383307 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383324 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383341 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383357 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383377 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383394 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383413 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383433 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383455 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383475 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383494 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383511 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383527 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383542 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383557 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383571 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383586 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383600 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383613 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383626 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383641 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383655 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383668 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383683 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" 
seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383697 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383709 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383721 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383735 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383750 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383763 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 
00:06:16.383776 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383789 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383801 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383813 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383826 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383841 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383855 4805 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383871 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383885 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383901 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383913 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383925 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383938 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383953 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383967 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383981 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.383996 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384011 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384025 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384038 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384056 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384068 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384293 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384312 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384327 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384373 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384387 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384401 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384415 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384430 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384443 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384456 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384469 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384543 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384558 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384572 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384586 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384598 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384611 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384626 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384638 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384651 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384664 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" 
seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384676 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384688 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384700 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384719 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384731 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384742 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384753 4805 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384765 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384779 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384792 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384804 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384818 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384830 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384842 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384854 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384866 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384883 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384898 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384912 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384928 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384944 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384959 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.384975 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385132 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385277 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" 
seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385300 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385314 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385329 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385351 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385365 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385394 4805 reconstruct.go:97] "Volume reconstruction finished" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.385404 4805 reconciler.go:26] "Reconciler: start to sync state" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.399872 4805 manager.go:324] Recovery completed Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 
00:06:16.412640 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.416504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.416555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.416566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.417439 4805 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.417461 4805 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.417480 4805 state_mem.go:36] "Initialized new in-memory state store" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.420038 4805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.421981 4805 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.422019 4805 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.422044 4805 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.422084 4805 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.422848 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.422911 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.426824 4805 policy_none.go:49] "None policy: Start" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.427754 4805 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.427797 4805 state_mem.go:35] "Initializing new in-memory state store" Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.462214 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.482566 4805 manager.go:334] "Starting Device Plugin manager" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.482640 4805 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.482658 4805 server.go:79] "Starting device plugin registration server" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.483155 4805 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.483177 4805 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.483697 4805 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.483793 4805 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.483810 4805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.492916 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.522686 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.522762 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.524040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.524079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.524089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.524244 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.524436 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.524471 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525484 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525564 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.525596 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526333 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526464 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526634 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.526671 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.527359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.527379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.527387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.527477 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.527798 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.527826 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.528241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.528262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.528271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.528684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.528717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.528849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.529217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.529244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.529258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.529401 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.529435 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.530227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.530259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.530272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.572323 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.584034 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.585791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.585828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.585839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.585865 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.586391 4805 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590659 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590865 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.590967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.591051 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.591090 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.591116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.591148 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.591184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.591273 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.591312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692770 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692831 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692861 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692880 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692898 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692917 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692935 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692950 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692971 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693082 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693184 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.692969 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693371 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693398 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693417 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693459 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693498 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693399 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693522 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.693434 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.787110 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.788414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.788456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.788476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.788502 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.788952 4805 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.849276 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.857272 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.867697 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bf407c3453752f8ecf323d6f74dd545aac24b6ea0ac0cccdfee09ffd40e2923e WatchSource:0}: Error finding container bf407c3453752f8ecf323d6f74dd545aac24b6ea0ac0cccdfee09ffd40e2923e: Status 404 returned error can't find the container with id bf407c3453752f8ecf323d6f74dd545aac24b6ea0ac0cccdfee09ffd40e2923e Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.871606 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-08af15bfeabd721105e94b70c6eaa481127b0e600c593cc96d70d90757ebd5fa WatchSource:0}: Error finding container 08af15bfeabd721105e94b70c6eaa481127b0e600c593cc96d70d90757ebd5fa: Status 404 returned error can't find the container with id 08af15bfeabd721105e94b70c6eaa481127b0e600c593cc96d70d90757ebd5fa Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.873100 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.888529 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: I1203 00:06:16.892795 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.907361 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b32f40aff127249abadbd3a68a2ac12471b808a2791b873e176f876f9cfb2b3d WatchSource:0}: Error finding container b32f40aff127249abadbd3a68a2ac12471b808a2791b873e176f876f9cfb2b3d: Status 404 returned error can't find the container with id b32f40aff127249abadbd3a68a2ac12471b808a2791b873e176f876f9cfb2b3d Dec 03 00:06:16 crc kubenswrapper[4805]: W1203 00:06:16.911113 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-afc38fc8be1686c59bbf6f5d20fb800f56eda5e1aa561455e3e59f967f26e9d9 WatchSource:0}: Error finding container afc38fc8be1686c59bbf6f5d20fb800f56eda5e1aa561455e3e59f967f26e9d9: Status 404 returned error can't find the container with id afc38fc8be1686c59bbf6f5d20fb800f56eda5e1aa561455e3e59f967f26e9d9 Dec 03 00:06:16 crc kubenswrapper[4805]: E1203 00:06:16.973421 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.189586 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.191525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.191565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.191578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.191602 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:17 crc kubenswrapper[4805]: E1203 00:06:17.192088 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.359973 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.361944 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:04:51.867669042 +0000 UTC Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.362015 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 306h58m34.505656964s for next certificate rotation Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.427500 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.427632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b32f40aff127249abadbd3a68a2ac12471b808a2791b873e176f876f9cfb2b3d"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.429035 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f" exitCode=0 Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.429100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.429127 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58be11bfd616aa18ce3afa85e10dd9ae10c5b7d62f6328782dc65590d8d0c925"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.429368 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.430692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.430723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.430734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.432265 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 
00:06:17.432259 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5" exitCode=0 Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.432344 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.432389 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"08af15bfeabd721105e94b70c6eaa481127b0e600c593cc96d70d90757ebd5fa"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.432549 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.432920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.432942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.432952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.433142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.433166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.433177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 
00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.434004 4805 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211" exitCode=0 Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.434087 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.434127 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bf407c3453752f8ecf323d6f74dd545aac24b6ea0ac0cccdfee09ffd40e2923e"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.434262 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.435494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.435633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.435729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.436175 4805 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71" exitCode=0 Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.436232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.436275 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"afc38fc8be1686c59bbf6f5d20fb800f56eda5e1aa561455e3e59f967f26e9d9"} Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.436368 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.437266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.437293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.437302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:17 crc kubenswrapper[4805]: W1203 00:06:17.490286 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 00:06:17 crc kubenswrapper[4805]: E1203 00:06:17.490405 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:17 crc 
kubenswrapper[4805]: W1203 00:06:17.598312 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 00:06:17 crc kubenswrapper[4805]: E1203 00:06:17.598408 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:17 crc kubenswrapper[4805]: W1203 00:06:17.649660 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 00:06:17 crc kubenswrapper[4805]: E1203 00:06:17.649758 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:17 crc kubenswrapper[4805]: W1203 00:06:17.707325 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 00:06:17 crc kubenswrapper[4805]: E1203 00:06:17.707437 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 00:06:17 crc kubenswrapper[4805]: E1203 00:06:17.775132 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.993075 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.994615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.994653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.994662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:17 crc kubenswrapper[4805]: I1203 00:06:17.994707 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.377988 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.439220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"52b9bd6a2ff9fb5775064416b40e7c898532700f85cd1f6fc749af0fb3618454"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.439389 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.440355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.440385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.440402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.444954 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.444991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.445008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.445104 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.445923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.445952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.445966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.449294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.449322 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.449337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.449417 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.450041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.450068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.450083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.454020 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.454092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.454118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.454130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.454144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.454312 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.455225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.455263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:18 crc 
kubenswrapper[4805]: I1203 00:06:18.455274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.457021 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502" exitCode=0 Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.457081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502"} Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.457207 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.459040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.459077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:18 crc kubenswrapper[4805]: I1203 00:06:18.459090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.461504 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e" exitCode=0 Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.461582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e"} Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.461645 4805 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.461774 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.462684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.462717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.462728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.462764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.462785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.462798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.684701 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.684903 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.684952 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.686183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:19 crc 
kubenswrapper[4805]: I1203 00:06:19.686237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.686252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:19 crc kubenswrapper[4805]: I1203 00:06:19.843495 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467364 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0"} Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467427 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085"} Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467444 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4"} Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467456 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a"} Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467467 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7"} Dec 03 
00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467485 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467563 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.467652 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.468844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.468872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.468882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.469685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.469724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.469733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.869165 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.869406 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.871000 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.871033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.871046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:20 crc kubenswrapper[4805]: I1203 00:06:20.875980 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:21 crc kubenswrapper[4805]: I1203 00:06:21.470324 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:21 crc kubenswrapper[4805]: I1203 00:06:21.471177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:21 crc kubenswrapper[4805]: I1203 00:06:21.471244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:21 crc kubenswrapper[4805]: I1203 00:06:21.471260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:21 crc kubenswrapper[4805]: I1203 00:06:21.681496 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.335605 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.336140 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.337644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.337692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.337703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.472737 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.472804 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.474066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.474098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.474110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:22 crc kubenswrapper[4805]: I1203 00:06:22.846078 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.046294 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.475741 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.476828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.476857 
4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.476866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.810481 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.810995 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.812349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.812414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:23 crc kubenswrapper[4805]: I1203 00:06:23.812434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.047309 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.047519 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.048627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.048660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.048671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 
00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.478235 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.479303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.479335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.479345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.682263 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:06:24 crc kubenswrapper[4805]: I1203 00:06:24.682362 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 00:06:26 crc kubenswrapper[4805]: E1203 00:06:26.493349 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 00:06:27 crc kubenswrapper[4805]: E1203 00:06:27.996135 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 03 00:06:28 crc kubenswrapper[4805]: I1203 00:06:28.028482 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 00:06:28 crc kubenswrapper[4805]: I1203 00:06:28.028717 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:28 crc kubenswrapper[4805]: I1203 00:06:28.030324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:28 crc kubenswrapper[4805]: I1203 00:06:28.030379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:28 crc kubenswrapper[4805]: I1203 00:06:28.030475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:28 crc kubenswrapper[4805]: I1203 00:06:28.361469 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 03 00:06:28 crc kubenswrapper[4805]: E1203 00:06:28.380317 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.227669 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 00:06:29 crc kubenswrapper[4805]: 
I1203 00:06:29.227733 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.235136 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.235234 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.596390 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.597686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.597731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.597744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.597774 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.694011 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]log ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]etcd ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/priority-and-fairness-filter ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-apiextensions-informers ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-apiextensions-controllers ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/crd-informer-synced ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-system-namespaces-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 03 00:06:29 crc kubenswrapper[4805]: 
[+]poststarthook/start-legacy-token-tracking-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 03 00:06:29 crc kubenswrapper[4805]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 03 00:06:29 crc kubenswrapper[4805]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/bootstrap-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/start-kube-aggregator-informers ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/apiservice-registration-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/apiservice-discovery-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]autoregister-completion ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/apiservice-openapi-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 03 00:06:29 crc kubenswrapper[4805]: livez check failed Dec 03 00:06:29 crc kubenswrapper[4805]: I1203 00:06:29.694085 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:06:32 crc 
kubenswrapper[4805]: I1203 00:06:32.495571 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 00:06:32 crc kubenswrapper[4805]: I1203 00:06:32.508694 4805 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 03 00:06:32 crc kubenswrapper[4805]: I1203 00:06:32.852621 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:32 crc kubenswrapper[4805]: I1203 00:06:32.852778 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:32 crc kubenswrapper[4805]: I1203 00:06:32.853969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:32 crc kubenswrapper[4805]: I1203 00:06:32.854087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:32 crc kubenswrapper[4805]: I1203 00:06:32.854149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:33 crc kubenswrapper[4805]: I1203 00:06:33.300027 4805 csr.go:261] certificate signing request csr-8dd8r is approved, waiting to be issued Dec 03 00:06:33 crc kubenswrapper[4805]: I1203 00:06:33.337811 4805 csr.go:257] certificate signing request csr-8dd8r is issued Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.223437 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.225691 4805 trace.go:236] Trace[1046410452]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 
00:06:19.472) (total time: 14753ms): Dec 03 00:06:34 crc kubenswrapper[4805]: Trace[1046410452]: ---"Objects listed" error: 14753ms (00:06:34.225) Dec 03 00:06:34 crc kubenswrapper[4805]: Trace[1046410452]: [14.753509369s] [14.753509369s] END Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.225720 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.226446 4805 trace.go:236] Trace[225278107]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 00:06:19.654) (total time: 14571ms): Dec 03 00:06:34 crc kubenswrapper[4805]: Trace[225278107]: ---"Objects listed" error: 14571ms (00:06:34.226) Dec 03 00:06:34 crc kubenswrapper[4805]: Trace[225278107]: [14.571805797s] [14.571805797s] END Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.226488 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.302386 4805 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.302435 4805 trace.go:236] Trace[1146260591]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 00:06:20.185) (total time: 14116ms): Dec 03 00:06:34 crc kubenswrapper[4805]: Trace[1146260591]: ---"Objects listed" error: 14116ms (00:06:34.302) Dec 03 00:06:34 crc kubenswrapper[4805]: Trace[1146260591]: [14.116886777s] [14.116886777s] END Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.303028 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.303149 4805 trace.go:236] Trace[2061434221]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 00:06:19.914) (total time: 14388ms): Dec 03 00:06:34 crc kubenswrapper[4805]: 
Trace[2061434221]: ---"Objects listed" error: 14387ms (00:06:34.302) Dec 03 00:06:34 crc kubenswrapper[4805]: Trace[2061434221]: [14.388121056s] [14.388121056s] END Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.303185 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.339095 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-03 00:01:33 +0000 UTC, rotation deadline is 2026-10-17 11:11:49.98305377 +0000 UTC Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.339513 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7643h5m15.643546889s for next certificate rotation Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.341994 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.363257 4805 apiserver.go:52] "Watching apiserver" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.365557 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.365965 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.366444 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.366491 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.366548 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.366460 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.366710 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.367702 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.377444 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.378372 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.378565 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.378752 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.379837 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.379969 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.380582 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.384904 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.384980 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.385033 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.385060 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.385081 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.385170 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.411816 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.420047 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.439635 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.451397 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.462974 4805 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.463678 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.464104 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.474769 4805 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.485723 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.501098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec4
2f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503094 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503167 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503222 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503251 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503328 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 00:06:34 crc 
kubenswrapper[4805]: I1203 00:06:34.503378 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503398 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503419 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503418 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503462 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503542 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503578 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503627 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503656 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503725 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503759 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503789 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503970 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504239 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504310 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503657 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" 
(UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503785 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.503795 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504214 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504236 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504347 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504375 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504647 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504679 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504695 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504723 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.504738 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505227 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505270 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505345 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505421 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505718 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505796 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.505878 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506020 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506028 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506070 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506225 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506262 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506281 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506298 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506316 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506333 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506353 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506384 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506421 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506439 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506457 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506491 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506509 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506526 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506558 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506591 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506607 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506630 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506664 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506680 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506730 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506780 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506815 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506831 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") 
" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506847 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506850 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506867 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506885 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.506901 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.507055 4805 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.507269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.507648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.507701 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.507955 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508342 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508393 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508428 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508448 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508465 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508480 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508733 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508753 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.508857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.509086 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510400 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510842 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510900 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510966 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.510988 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511011 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511038 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511140 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511168 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511190 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511295 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511323 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511346 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511369 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511392 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511441 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511477 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511621 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511646 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 
00:06:34.511669 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511691 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511753 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511772 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511812 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511832 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511861 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511881 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 00:06:34 crc 
kubenswrapper[4805]: I1203 00:06:34.511900 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511920 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511943 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511969 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.511989 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512043 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512047 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512063 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512110 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512224 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512458 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512505 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512562 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512594 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 
00:06:34.512633 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512671 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512706 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512741 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512784 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512813 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512841 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512877 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512919 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512957 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513004 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513048 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513089 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513126 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513222 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513250 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513283 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513353 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513380 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513481 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513510 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513545 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513579 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513609 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 
00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513708 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513800 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513841 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513902 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513933 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513968 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513999 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.514030 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.514061 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.514096 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.514127 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517156 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 
03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517183 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517221 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517337 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517357 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517379 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517429 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519122 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519519 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.512947 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513390 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513596 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.513785 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.514048 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.514560 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.514598 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.515019 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.515164 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.515367 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.515633 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.516016 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.516054 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.516473 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.516759 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.516841 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.515178 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.517253 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.516840 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.518075 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.518057 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.518369 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.518511 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.518668 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519017 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519057 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519468 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.520011 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.521884 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2" exitCode=255 Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.520185 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.518683 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.520487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.520526 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.521015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.521529 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.521788 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.522227 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.522239 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.522366 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.525851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.526338 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.526402 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.526431 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.526930 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527002 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527042 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527082 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527158 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527192 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527240 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527266 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527294 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527399 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527436 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527463 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527489 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527516 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527570 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527594 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527673 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527676 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527700 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527729 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527745 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527759 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.527834 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528285 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528324 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528363 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528053 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528414 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528585 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528597 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528818 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528868 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528888 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528949 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.528959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529018 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529115 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529133 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529161 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529165 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529222 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529247 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529420 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529439 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529457 4805 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529477 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529489 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529502 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529515 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529529 4805 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529543 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529555 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529564 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529573 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529584 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529595 4805 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529604 4805 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529615 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529665 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 03 
00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529675 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529686 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529703 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529713 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529724 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529733 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529782 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 
00:06:34.529800 4805 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529819 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529829 4805 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529839 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529847 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529859 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529869 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529879 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529888 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529897 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529907 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529917 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529929 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529945 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529957 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529969 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529982 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529994 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530004 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530015 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530025 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530048 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530057 4805 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530070 4805 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530083 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530097 4805 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530109 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530132 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530145 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530156 4805 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530166 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530177 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530187 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530209 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530218 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530228 4805 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530237 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530246 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530255 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530267 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530276 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530286 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530295 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" 
DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530305 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530313 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530324 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530334 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530344 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530366 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530375 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530384 
4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530394 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530403 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530412 4805 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530423 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530433 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530444 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530454 4805 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node 
\"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530463 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530472 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530480 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530490 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530500 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530509 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530519 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530528 4805 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530538 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530547 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530556 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530565 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530574 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530584 4805 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530595 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530762 4805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.532420 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529452 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.529681 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530729 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530801 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.530992 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.531192 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.533780 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:35.033741697 +0000 UTC m=+18.882704303 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.531302 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.533778 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.531608 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.531742 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.531853 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.531904 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.531945 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.533944 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:35.033934801 +0000 UTC m=+18.882897617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.519796 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.534460 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:35.034437425 +0000 UTC m=+18.883400031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.534587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.536661 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.536842 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.536844 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.536867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.537082 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.537385 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.538417 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.538951 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.540321 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.520415 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.520771 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.540698 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.541726 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.541997 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542384 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542042 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542714 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542724 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542991 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.542972 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.543243 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.543374 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.543863 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.545049 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.545781 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.548055 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.548434 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.548583 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.548609 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.548623 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.548704 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:35.048680624 +0000 UTC m=+18.897643230 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.547511 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.548771 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2"} Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.521895 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.549142 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.549466 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.549705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.549739 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.549879 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.550173 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.550266 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.550736 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.520929 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.550790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.551114 4805 scope.go:117] "RemoveContainer" containerID="f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.551585 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.551623 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.551659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.551587 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.552001 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.552095 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.552144 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.554592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.556733 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.556792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.557248 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.557297 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.557595 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.558376 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.558556 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.558644 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.558896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.558970 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.558919 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.559172 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.559683 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.559714 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.559734 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.559804 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:35.059776774 +0000 UTC m=+18.908739390 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.562591 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.562712 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.562794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.563168 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.563542 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.563548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.564581 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.564769 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.565711 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.567248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.567425 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.567914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.571410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.571451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.571983 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.572088 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.572322 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.572579 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.572780 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.573757 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.575476 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.583122 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.583964 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.584708 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.585586 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.589129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.589156 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.589890 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.590658 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.591550 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.595268 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: E1203 00:06:34.602961 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.606688 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.606771 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.609172 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.619080 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.631720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632028 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632226 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632304 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632376 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632432 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632487 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632540 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632619 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632680 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc 
kubenswrapper[4805]: I1203 00:06:34.632735 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632788 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632842 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632898 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632958 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633011 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633081 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633145 4805 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633221 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633348 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633443 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633509 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633565 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633622 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633677 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633735 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633795 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633856 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633918 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.633971 4805 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634030 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634090 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634148 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634222 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634298 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634363 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634442 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634499 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634551 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 
00:06:34.634609 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634663 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634719 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634776 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634831 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634886 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.634944 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635001 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635127 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635143 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635157 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635171 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635180 4805 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.632259 4805 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635191 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635523 4805 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635597 4805 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635674 4805 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" 
Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635736 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635810 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635872 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.635949 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.631930 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636014 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636058 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636069 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636079 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636091 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636101 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636111 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636120 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636129 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on 
node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636138 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636175 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636185 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636210 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636225 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636236 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636246 4805 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 
00:06:34.636256 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636266 4805 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636276 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636286 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636295 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636307 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636316 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636334 4805 reconciler_common.go:293] "Volume detached for 
volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636347 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636356 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636366 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636376 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636408 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636420 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636433 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636444 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636456 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636467 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636478 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636489 4805 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636500 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636511 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.636522 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.646638 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.660540 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.672803 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.689930 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-
03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.691823 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.694481 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.704071 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.709269 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.710636 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 00:06:34 crc kubenswrapper[4805]: W1203 00:06:34.713051 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-1f487ce8b1ef66c986c489e45f07516def2e45a63ec6c588ad4b82834e07f37e WatchSource:0}: Error finding container 1f487ce8b1ef66c986c489e45f07516def2e45a63ec6c588ad4b82834e07f37e: Status 404 returned error can't find the container with id 1f487ce8b1ef66c986c489e45f07516def2e45a63ec6c588ad4b82834e07f37e Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.723642 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: W1203 00:06:34.723817 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-ae405c4bf63c8abb3a41ca7d2d8d39413d390e62f95ebe228b58e6065d93ee70 WatchSource:0}: Error finding container ae405c4bf63c8abb3a41ca7d2d8d39413d390e62f95ebe228b58e6065d93ee70: Status 404 returned error can't find the container with id ae405c4bf63c8abb3a41ca7d2d8d39413d390e62f95ebe228b58e6065d93ee70 Dec 03 00:06:34 crc kubenswrapper[4805]: W1203 00:06:34.728740 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3e8fe7a701011f47f72976ee838486e45253250e92b1c7e4e4878073cef1fd89 WatchSource:0}: Error finding container 3e8fe7a701011f47f72976ee838486e45253250e92b1c7e4e4878073cef1fd89: Status 404 
returned error can't find the container with id 3e8fe7a701011f47f72976ee838486e45253250e92b1c7e4e4878073cef1fd89 Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.735536 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.754962 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 
2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.774326 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.785689 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.800454 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.821406 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.853559 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.868022 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.894002 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.945938 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:34 crc kubenswrapper[4805]: I1203 00:06:34.990055 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.012920 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.025963 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.035327 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.039411 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.039477 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.039502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.039612 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.039632 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.039668 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:36.039624303 +0000 UTC m=+19.888586919 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.039726 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:36.039712575 +0000 UTC m=+19.888675191 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.039753 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:36.039742176 +0000 UTC m=+19.888704792 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.048091 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC 
(now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.137830 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qh9zq"] Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.138362 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.140283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.140420 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.140675 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.140774 4805 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.140851 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.140965 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.141013 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.141031 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.140988 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:36.140967864 +0000 UTC m=+19.989930470 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:35 crc kubenswrapper[4805]: E1203 00:06:35.141126 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:36.141095827 +0000 UTC m=+19.990058613 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.149447 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.149691 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.149793 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.151207 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 
00:06:35.159989 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.183840 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de
2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.241755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-serviceca\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.241814 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zxz7\" (UniqueName: \"kubernetes.io/projected/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-kube-api-access-7zxz7\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " 
pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.241844 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-host\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.244130 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.271212 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.288706 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.305753 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.333173 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.342551 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-serviceca\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.342609 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zxz7\" (UniqueName: \"kubernetes.io/projected/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-kube-api-access-7zxz7\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.342633 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-host\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.342697 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-host\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.343579 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-serviceca\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.351173 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.365343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zxz7\" (UniqueName: \"kubernetes.io/projected/5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8-kube-api-access-7zxz7\") pod \"node-ca-qh9zq\" (UID: \"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\") " pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.367743 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.451692 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qh9zq" Dec 03 00:06:35 crc kubenswrapper[4805]: W1203 00:06:35.467049 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c3c462d_b6c9_48b0_b2b7_a03ae311d7c8.slice/crio-2c006f47ca42a688a86de5d94dd57f4c2a3433562f03cae68491013924a9deb1 WatchSource:0}: Error finding container 2c006f47ca42a688a86de5d94dd57f4c2a3433562f03cae68491013924a9deb1: Status 404 returned error can't find the container with id 2c006f47ca42a688a86de5d94dd57f4c2a3433562f03cae68491013924a9deb1 Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.535769 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.541655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67"} Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.542523 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.548962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e"} Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.549001 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1f487ce8b1ef66c986c489e45f07516def2e45a63ec6c588ad4b82834e07f37e"} Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.552030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qh9zq" event={"ID":"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8","Type":"ContainerStarted","Data":"2c006f47ca42a688a86de5d94dd57f4c2a3433562f03cae68491013924a9deb1"} Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.557667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3e8fe7a701011f47f72976ee838486e45253250e92b1c7e4e4878073cef1fd89"} Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.560273 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.575345 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c"} Dec 03 00:06:35 crc kubenswrapper[4805]: 
I1203 00:06:35.575623 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0"} Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.575691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ae405c4bf63c8abb3a41ca7d2d8d39413d390e62f95ebe228b58e6065d93ee70"} Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.610041 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-29wnh"] Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.610593 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dd5rs"] Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.610878 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.611148 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.617070 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.617548 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.622849 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.621076 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.621950 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.622313 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.622820 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.620787 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.624481 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.642936 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.658731 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.683089 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.697056 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.708393 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.722067 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.734839 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.749333 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.749626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfx5\" (UniqueName: \"kubernetes.io/projected/42d6da4d-d781-4243-b5c3-28a8cf91ef53-kube-api-access-rlfx5\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.749688 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42d6da4d-d781-4243-b5c3-28a8cf91ef53-mcd-auth-proxy-config\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.750305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/60be52b4-e3f7-4b20-b854-5521ee573c09-hosts-file\") pod \"node-resolver-29wnh\" (UID: \"60be52b4-e3f7-4b20-b854-5521ee573c09\") " pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.750364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78bb\" (UniqueName: \"kubernetes.io/projected/60be52b4-e3f7-4b20-b854-5521ee573c09-kube-api-access-g78bb\") pod \"node-resolver-29wnh\" (UID: \"60be52b4-e3f7-4b20-b854-5521ee573c09\") " pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.750442 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/42d6da4d-d781-4243-b5c3-28a8cf91ef53-proxy-tls\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.750779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/42d6da4d-d781-4243-b5c3-28a8cf91ef53-rootfs\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.764221 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.778617 
4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.790150 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.805098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.818455 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.833644 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.845488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.851863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlfx5\" (UniqueName: \"kubernetes.io/projected/42d6da4d-d781-4243-b5c3-28a8cf91ef53-kube-api-access-rlfx5\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.851913 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42d6da4d-d781-4243-b5c3-28a8cf91ef53-mcd-auth-proxy-config\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.851953 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/60be52b4-e3f7-4b20-b854-5521ee573c09-hosts-file\") pod \"node-resolver-29wnh\" (UID: \"60be52b4-e3f7-4b20-b854-5521ee573c09\") " pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.851976 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78bb\" (UniqueName: \"kubernetes.io/projected/60be52b4-e3f7-4b20-b854-5521ee573c09-kube-api-access-g78bb\") pod \"node-resolver-29wnh\" (UID: \"60be52b4-e3f7-4b20-b854-5521ee573c09\") " pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.851997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42d6da4d-d781-4243-b5c3-28a8cf91ef53-proxy-tls\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.852022 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/42d6da4d-d781-4243-b5c3-28a8cf91ef53-rootfs\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.852089 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/42d6da4d-d781-4243-b5c3-28a8cf91ef53-rootfs\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.852217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/60be52b4-e3f7-4b20-b854-5521ee573c09-hosts-file\") pod \"node-resolver-29wnh\" (UID: \"60be52b4-e3f7-4b20-b854-5521ee573c09\") " pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.852787 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42d6da4d-d781-4243-b5c3-28a8cf91ef53-mcd-auth-proxy-config\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.857006 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.860152 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42d6da4d-d781-4243-b5c3-28a8cf91ef53-proxy-tls\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.872398 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfx5\" (UniqueName: \"kubernetes.io/projected/42d6da4d-d781-4243-b5c3-28a8cf91ef53-kube-api-access-rlfx5\") pod \"machine-config-daemon-dd5rs\" (UID: \"42d6da4d-d781-4243-b5c3-28a8cf91ef53\") " pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.881716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78bb\" (UniqueName: \"kubernetes.io/projected/60be52b4-e3f7-4b20-b854-5521ee573c09-kube-api-access-g78bb\") pod \"node-resolver-29wnh\" (UID: \"60be52b4-e3f7-4b20-b854-5521ee573c09\") " pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.893308 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.915020 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.933190 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-29wnh" Dec 03 00:06:35 crc kubenswrapper[4805]: W1203 00:06:35.946655 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60be52b4_e3f7_4b20_b854_5521ee573c09.slice/crio-949300a10adf212665fbbe3c0a19451a40ff415e29eba0f4396bf513aed293de WatchSource:0}: Error finding container 949300a10adf212665fbbe3c0a19451a40ff415e29eba0f4396bf513aed293de: Status 404 returned error can't find the container with id 949300a10adf212665fbbe3c0a19451a40ff415e29eba0f4396bf513aed293de Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.957281 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:06:35 crc kubenswrapper[4805]: W1203 00:06:35.974652 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d6da4d_d781_4243_b5c3_28a8cf91ef53.slice/crio-f792d7927a63ef3e89b76a7717bdc795f5beb38ee5005104a3d4cd6678d1fa03 WatchSource:0}: Error finding container f792d7927a63ef3e89b76a7717bdc795f5beb38ee5005104a3d4cd6678d1fa03: Status 404 returned error can't find the container with id f792d7927a63ef3e89b76a7717bdc795f5beb38ee5005104a3d4cd6678d1fa03 Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.988379 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6pggh"] Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.989163 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.989512 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lllfh"] Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.989977 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-lllfh" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.995775 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 00:06:35 crc kubenswrapper[4805]: I1203 00:06:35.995972 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.003387 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.003637 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.003754 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.004346 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.006214 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.022867 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.038755 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.054180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.054317 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.054349 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 
00:06:36.054419 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:38.054379433 +0000 UTC m=+21.903342039 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.054440 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.054482 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.054509 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:38.054492716 +0000 UTC m=+21.903455322 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.054545 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:38.054526257 +0000 UTC m=+21.903488863 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.058557 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.075942 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.103716 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.136283 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.154984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v95vt\" (UniqueName: \"kubernetes.io/projected/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-kube-api-access-v95vt\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-kubelet\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155059 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-system-cni-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-os-release\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155100 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-hostroot\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155117 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-multus-certs\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-cnibin\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155280 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155328 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 
00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155469 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-etc-kubernetes\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155487 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/839326a5-41df-492f-83c4-3ee9e2964dc8-cni-binary-copy\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155519 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-system-cni-dir\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.155442 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155535 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cnibin\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155554 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-cni-multus\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-cni-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.155579 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.155610 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-netns\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cni-binary-copy\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " 
pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-socket-dir-parent\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.155715 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:38.155690082 +0000 UTC m=+22.004652868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155758 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-k8s-cni-cncf-io\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155823 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-conf-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") 
" pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-os-release\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-daemon-config\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155928 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvffg\" (UniqueName: \"kubernetes.io/projected/839326a5-41df-492f-83c4-3ee9e2964dc8-kube-api-access-rvffg\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155955 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.155997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.156033 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-cni-bin\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.156113 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.156133 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.156147 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.156211 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:38.156174454 +0000 UTC m=+22.005137060 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.167911 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.199995 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.241056 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-cni-bin\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-daemon-config\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvffg\" (UniqueName: \"kubernetes.io/projected/839326a5-41df-492f-83c4-3ee9e2964dc8-kube-api-access-rvffg\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257379 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-kubelet\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v95vt\" (UniqueName: \"kubernetes.io/projected/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-kube-api-access-v95vt\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-system-cni-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-os-release\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-cnibin\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-hostroot\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-multus-certs\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257556 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6pggh\" (UID: 
\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-etc-kubernetes\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/839326a5-41df-492f-83c4-3ee9e2964dc8-cni-binary-copy\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257607 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-system-cni-dir\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257624 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cnibin\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-cni-multus\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " 
pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-cni-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-netns\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257691 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cni-binary-copy\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-socket-dir-parent\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257726 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-k8s-cni-cncf-io\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 
00:06:36.257741 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-conf-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257755 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-os-release\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.257972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-os-release\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258012 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-cni-bin\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258655 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-daemon-config\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-etc-kubernetes\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-netns\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-system-cni-dir\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258893 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cnibin\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-cni-multus\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258962 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-cni-dir\") pod \"multus-lllfh\" (UID: 
\"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.258980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-var-lib-kubelet\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-socket-dir-parent\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-system-cni-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259092 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-cnibin\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259085 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-multus-certs\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259294 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-os-release\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259176 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-multus-conf-dir\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-host-run-k8s-cni-cncf-io\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259086 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/839326a5-41df-492f-83c4-3ee9e2964dc8-hostroot\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259479 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cni-binary-copy\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259479 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/839326a5-41df-492f-83c4-3ee9e2964dc8-cni-binary-copy\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259586 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.259787 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.264532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.287602 4805 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.288116 4805 projected.go:194] Error preparing data for projected volume kube-api-access-rvffg for pod openshift-multus/multus-lllfh: failed to fetch token: Post 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": read tcp 38.102.83.130:52852->38.102.83.130:6443: use of closed network connection Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.288265 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/839326a5-41df-492f-83c4-3ee9e2964dc8-kube-api-access-rvffg podName:839326a5-41df-492f-83c4-3ee9e2964dc8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:36.78823857 +0000 UTC m=+20.637201176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rvffg" (UniqueName: "kubernetes.io/projected/839326a5-41df-492f-83c4-3ee9e2964dc8-kube-api-access-rvffg") pod "multus-lllfh" (UID: "839326a5-41df-492f-83c4-3ee9e2964dc8") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": read tcp 38.102.83.130:52852->38.102.83.130:6443: use of closed network connection Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.288366 4805 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.288590 4805 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.289100 4805 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 
crc kubenswrapper[4805]: W1203 00:06:36.289127 4805 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.289133 4805 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.288926 4805 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.288945 4805 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.289017 4805 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.289038 4805 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second 
and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.289213 4805 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.289484 4805 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.289610 4805 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.296980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v95vt\" (UniqueName: \"kubernetes.io/projected/64f3e4fa-808d-4e25-a03e-be11b8a1bcbc-kube-api-access-v95vt\") pod \"multus-additional-cni-plugins-6pggh\" (UID: \"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\") " pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.297242 4805 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.288825 4805 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.299821 4805 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.314301 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6pggh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.318531 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.351490 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] 
Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.385698 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6pk5"] Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.386550 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.388981 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.389517 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.389633 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.389971 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.390109 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.392753 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.401883 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 
00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.407292 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.423368 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.423511 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.423714 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.423384 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.423842 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:36 crc kubenswrapper[4805]: E1203 00:06:36.423920 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.428264 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.428821 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.430448 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.431075 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.432363 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.433625 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.434326 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.435457 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.436079 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.437646 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.438571 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.440265 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.441945 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.442696 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.444638 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.445335 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.447025 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.447599 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.448347 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.449897 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.450562 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.451886 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.453273 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.454675 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.455984 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.456227 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.456851 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.458467 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.459190 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.460612 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 
03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.461479 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.462604 4805 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.462733 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.465350 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.466762 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.467392 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.469367 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.470266 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.471497 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.472290 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.473639 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.474157 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.475550 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.476406 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.477677 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.478282 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.479650 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.480465 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.481919 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.482570 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.483661 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.484491 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.485729 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.486460 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.487048 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.498578 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.536940 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 
00:06:36.562284 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-ovn\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.562553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-node-log\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.562626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.562743 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-bin\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.562810 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-systemd-units\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: 
I1203 00:06:36.562880 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-slash\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.562966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-kubelet\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563038 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-etc-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563110 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-config\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563180 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mm9\" (UniqueName: \"kubernetes.io/projected/2dbad567-2c97-49dd-ac90-41fd66a3b606-kube-api-access-l6mm9\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc 
kubenswrapper[4805]: I1203 00:06:36.563281 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-env-overrides\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563358 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-log-socket\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563447 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-netns\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-var-lib-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" 
Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563768 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-netd\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563843 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovn-node-metrics-cert\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563922 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-script-lib\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.563947 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-systemd\") pod \"ovnkube-node-k6pk5\" (UID: 
\"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.574124 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.580502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.580567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.580579 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" 
event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"f792d7927a63ef3e89b76a7717bdc795f5beb38ee5005104a3d4cd6678d1fa03"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.581663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerStarted","Data":"3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.581698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerStarted","Data":"3c518ecc7c359affe337647765989fcafaad723b2ad43a3e747a3d51e0b554d0"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.583959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qh9zq" event={"ID":"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8","Type":"ContainerStarted","Data":"552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.586952 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-29wnh" event={"ID":"60be52b4-e3f7-4b20-b854-5521ee573c09","Type":"ContainerStarted","Data":"34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.586983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-29wnh" event={"ID":"60be52b4-e3f7-4b20-b854-5521ee573c09","Type":"ContainerStarted","Data":"949300a10adf212665fbbe3c0a19451a40ff415e29eba0f4396bf513aed293de"} Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.613939 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.657879 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664638 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-systemd\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-ovn\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664809 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-node-log\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-bin\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-systemd-units\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664934 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-slash\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-kubelet\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc 
kubenswrapper[4805]: I1203 00:06:36.664977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-etc-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664994 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-config\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665010 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mm9\" (UniqueName: \"kubernetes.io/projected/2dbad567-2c97-49dd-ac90-41fd66a3b606-kube-api-access-l6mm9\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-env-overrides\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-log-socket\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665075 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665090 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-netd\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665128 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-systemd-units\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665137 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-netns\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.664723 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-systemd\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665171 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-var-lib-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665191 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-ovn\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665228 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovn-node-metrics-cert\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-node-log\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-script-lib\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665276 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-ovn-kubernetes\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665300 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-bin\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-log-socket\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665757 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-slash\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665778 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-kubelet\") pod \"ovnkube-node-k6pk5\" (UID: 
\"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.665799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-etc-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.666072 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-script-lib\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.666209 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-netns\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.666239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.666261 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-netd\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 
crc kubenswrapper[4805]: I1203 00:06:36.666286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.666316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-var-lib-openvswitch\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.666585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-config\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.666697 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-env-overrides\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.670338 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovn-node-metrics-cert\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.695744 
4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.722860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6mm9\" (UniqueName: \"kubernetes.io/projected/2dbad567-2c97-49dd-ac90-41fd66a3b606-kube-api-access-l6mm9\") pod \"ovnkube-node-k6pk5\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.756604 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.795216 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.836243 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.867876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvffg\" (UniqueName: \"kubernetes.io/projected/839326a5-41df-492f-83c4-3ee9e2964dc8-kube-api-access-rvffg\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.877051 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.899319 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvffg\" (UniqueName: \"kubernetes.io/projected/839326a5-41df-492f-83c4-3ee9e2964dc8-kube-api-access-rvffg\") pod \"multus-lllfh\" (UID: \"839326a5-41df-492f-83c4-3ee9e2964dc8\") " pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.932093 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.935263 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-lllfh" Dec 03 00:06:36 crc kubenswrapper[4805]: W1203 00:06:36.949776 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod839326a5_41df_492f_83c4_3ee9e2964dc8.slice/crio-d2c4bcff18c114bbb11c0d7cf32a936c3ef6a89aa77526bb844e26ceac271bd3 WatchSource:0}: Error finding container d2c4bcff18c114bbb11c0d7cf32a936c3ef6a89aa77526bb844e26ceac271bd3: Status 404 returned error can't find the container with id d2c4bcff18c114bbb11c0d7cf32a936c3ef6a89aa77526bb844e26ceac271bd3 Dec 03 00:06:36 crc kubenswrapper[4805]: I1203 00:06:36.979765 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.002615 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.016315 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: W1203 00:06:37.017124 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dbad567_2c97_49dd_ac90_41fd66a3b606.slice/crio-3bbe2aa9c6561c04bb61c96f8415981d1b789bcd8512595e3d832e2d6f157536 WatchSource:0}: Error finding container 3bbe2aa9c6561c04bb61c96f8415981d1b789bcd8512595e3d832e2d6f157536: Status 404 returned error can't find the container with id 3bbe2aa9c6561c04bb61c96f8415981d1b789bcd8512595e3d832e2d6f157536 Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.053867 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.097087 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.144007 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.144614 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.185664 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.216317 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.253702 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.265874 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.305587 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.332958 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.344424 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.365300 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.405782 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.432114 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.444802 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.464713 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.504523 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.531046 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.564798 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.596448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerStarted","Data":"861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.596517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerStarted","Data":"d2c4bcff18c114bbb11c0d7cf32a936c3ef6a89aa77526bb844e26ceac271bd3"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.598327 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.598295 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.599999 4805 generic.go:334] "Generic (PLEG): container finished" podID="64f3e4fa-808d-4e25-a03e-be11b8a1bcbc" containerID="3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05" exitCode=0 Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.600072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerDied","Data":"3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.601851 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f" exitCode=0 Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.601896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" 
event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.601954 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"3bbe2aa9c6561c04bb61c96f8415981d1b789bcd8512595e3d832e2d6f157536"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.637409 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.644901 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.685516 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.715147 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.755762 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.786331 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.803315 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.811132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.812137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.812326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.812604 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.817426 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.865842 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.886125 4805 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.886466 4805 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.887825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.887972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.888112 4805 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.888227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.888314 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4805]: E1203 00:06:37.906747 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.910829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.910872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.910881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.910898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.910910 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.916663 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: E1203 00:06:37.923505 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.927403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.927436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.927445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.927462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.927473 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4805]: E1203 00:06:37.944599 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.952432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.952497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.952511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.952539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.952554 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.958845 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: E1203 00:06:37.968833 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.973093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.973138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.973151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.973170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.973184 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4805]: E1203 00:06:37.989808 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:37 crc kubenswrapper[4805]: E1203 00:06:37.989975 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.992501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.992573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.992587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.992612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.992628 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:37Z","lastTransitionTime":"2025-12-03T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:37 crc kubenswrapper[4805]: I1203 00:06:37.994815 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:37Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.033680 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.073317 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.077049 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.081373 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.081475 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.081532 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.081644 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:06:42.081604704 +0000 UTC m=+25.930567380 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.081656 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.081712 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.081764 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:42.081753258 +0000 UTC m=+25.930716074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.081793 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:06:42.081783529 +0000 UTC m=+25.930746345 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.088950 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.095353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.095417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.095431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.095455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.095472 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.098267 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.136023 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.177489 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.182538 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.182625 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182777 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182829 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182849 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182787 4805 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182913 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:42.182888832 +0000 UTC m=+26.031851438 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182927 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182942 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.182998 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:42.182978714 +0000 UTC m=+26.031941320 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.198131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.198162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.198171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.198186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.198217 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.236213 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.277412 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.300607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.300666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.300678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.300699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.300714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.319158 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.337608 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.380566 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.403601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.403653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.403665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.403684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.403696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.416463 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.423169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.423353 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.423438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.423557 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.423712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.423814 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.455720 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.491943 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.507087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.507139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.507154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.507173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.507187 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.538505 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.578464 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.607019 4805 generic.go:334] "Generic (PLEG): container finished" podID="64f3e4fa-808d-4e25-a03e-be11b8a1bcbc" containerID="066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d" exitCode=0 Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.607103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" 
event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerDied","Data":"066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.609904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.609933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.609942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.609959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.609971 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.612677 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.612740 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.612763 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.612776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.612788 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.612799 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} Dec 03 00:06:38 crc kubenswrapper[4805]: 
I1203 00:06:38.620669 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: E1203 00:06:38.633355 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.674535 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.713653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.713683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.713693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.713708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.713717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.715164 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.756158 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c
0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.807241 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.816093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.816427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.816459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.816477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.816496 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.849259 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.873251 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.919465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.919507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.919518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.919532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.919544 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:38Z","lastTransitionTime":"2025-12-03T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.921309 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.957104 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:38 crc kubenswrapper[4805]: I1203 00:06:38.996309 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:38Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.022027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.022361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.022479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.022605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.022698 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.041631 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.074681 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.113723 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.124983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc 
kubenswrapper[4805]: I1203 00:06:39.125031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.125043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.125059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.125069 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.159332 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.195459 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.227611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.227658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.227671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.227690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.227703 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.233529 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.273292 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.311884 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.330425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.330476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.330485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.330502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.330512 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.352837 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8
e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.398517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.432392 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.433978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.434011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.434026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.434047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.434059 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.472660 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.525812 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739
e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.536965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.537022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.537033 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.537050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.537060 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.553462 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c
0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.596525 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.617926 4805 generic.go:334] "Generic (PLEG): container finished" podID="64f3e4fa-808d-4e25-a03e-be11b8a1bcbc" containerID="68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132" exitCode=0 Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.618050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" 
event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerDied","Data":"68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.638244 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.640033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.640082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.640136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.640162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.640179 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.673234 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.712648 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.743235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.743279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.743291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.743311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.743325 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.751436 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.804570 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.833430 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.845964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.846018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.846032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.846056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.846073 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.877695 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 
00:06:39.919214 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.949267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.949321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.949337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.949359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.949372 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:39Z","lastTransitionTime":"2025-12-03T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.955347 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:39 crc kubenswrapper[4805]: I1203 00:06:39.992926 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.033654 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.051993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc 
kubenswrapper[4805]: I1203 00:06:40.052041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.052052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.052072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.052083 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.072982 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.114813 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.155032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.155087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.155098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.155117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.155130 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.159164 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.197590 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.258299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.258340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.258348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.258362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.258374 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.360633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.360682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.360692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.360707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.360720 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.423178 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.423271 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:40 crc kubenswrapper[4805]: E1203 00:06:40.423405 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.423460 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:40 crc kubenswrapper[4805]: E1203 00:06:40.423514 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:40 crc kubenswrapper[4805]: E1203 00:06:40.423651 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.463101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.463149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.463158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.463173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.463183 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.566186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.566265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.566285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.566303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.566316 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.622989 4805 generic.go:334] "Generic (PLEG): container finished" podID="64f3e4fa-808d-4e25-a03e-be11b8a1bcbc" containerID="f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892" exitCode=0 Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.623044 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerDied","Data":"f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.658791 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.668606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.668656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.668672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 
00:06:40.668698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.668711 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.677429 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.695467 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.737709 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.760293 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.778375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.778405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.778414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc 
kubenswrapper[4805]: I1203 00:06:40.778429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.778439 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.802286 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.828637 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.844713 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ba
a38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.861518 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.876739 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.882017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 
00:06:40.882046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.882054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.882068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.882078 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.890921 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.905305 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.919509 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.931684 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.943789 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.984767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.984809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.984818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.984834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:40 crc kubenswrapper[4805]: I1203 00:06:40.984843 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:40Z","lastTransitionTime":"2025-12-03T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.087794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.087845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.087856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.087873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.087883 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.191269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.191310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.191318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.191336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.191347 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.294640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.294695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.294708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.294730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.294742 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.397676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.397766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.397781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.397803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.397830 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.500260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.500294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.500302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.500317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.500330 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.603838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.604331 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.604420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.604502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.604609 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.634893 4805 generic.go:334] "Generic (PLEG): container finished" podID="64f3e4fa-808d-4e25-a03e-be11b8a1bcbc" containerID="f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a" exitCode=0 Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.635008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerDied","Data":"f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.642817 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.653782 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.672486 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.693142 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.707626 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.708745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.708787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.708802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.708821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.708830 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.728913 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.743152 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.761581 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.780507 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.794721 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.811369 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.812105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc 
kubenswrapper[4805]: I1203 00:06:41.812144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.812153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.812169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.812179 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.834441 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.849676 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.863555 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.876520 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.886344 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.915830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.915880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.915891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.915908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:41 crc kubenswrapper[4805]: I1203 00:06:41.915921 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:41Z","lastTransitionTime":"2025-12-03T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.023533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.024096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.024393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.024626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.024781 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.128634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.128699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.128715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.128737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.128752 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.131227 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.131373 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 00:06:50.131349827 +0000 UTC m=+33.980312433 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.131409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.131436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.131521 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.131593 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:06:50.131576454 +0000 UTC m=+33.980539070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.131533 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.131664 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:50.131654366 +0000 UTC m=+33.980616972 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.231762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.231834 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.231947 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.231965 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.231977 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.231947 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.231997 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.232009 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.232021 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:50.23200104 +0000 UTC m=+34.080963646 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.232050 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:06:50.232037701 +0000 UTC m=+34.081000317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.232125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.232149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.232157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.232169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.232180 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.333992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.334031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.334041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.334054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.334063 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.422744 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.422781 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.423107 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.423175 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.423338 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:42 crc kubenswrapper[4805]: E1203 00:06:42.423441 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.436427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.436465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.436474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.436489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.436501 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.539703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.539753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.539771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.539788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.539801 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.642768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.642800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.642810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.642823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.642833 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.650105 4805 generic.go:334] "Generic (PLEG): container finished" podID="64f3e4fa-808d-4e25-a03e-be11b8a1bcbc" containerID="7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc" exitCode=0 Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.650153 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerDied","Data":"7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.665815 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.683462 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.701800 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.717655 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.732227 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.747418 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.751380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.751423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.751434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.751458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.751471 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.760789 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.773653 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.796642 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.808393 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.821365 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.835084 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.851104 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.853975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.854044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.854056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc 
kubenswrapper[4805]: I1203 00:06:42.854070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.854080 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.866019 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.886732 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:42Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.956838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.956885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.956896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.956911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:42 crc kubenswrapper[4805]: I1203 00:06:42.956924 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:42Z","lastTransitionTime":"2025-12-03T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.059962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.060530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.060710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.060863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.061035 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.164652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.164700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.164711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.164728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.164739 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.267276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.267316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.267328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.267344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.267357 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.370539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.370589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.370602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.370616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.370627 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.474136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.474230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.474250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.474280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.474299 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.577155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.577209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.577223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.577242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.577256 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.656888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.657327 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.657430 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.657473 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.673033 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" event={"ID":"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc","Type":"ContainerStarted","Data":"892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.677583 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.679915 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.679956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.679965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.679985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.679998 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.693251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.694041 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.694327 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.712501 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.728037 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.741856 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.758576 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.777736 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.791636 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.803491 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.814741 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.815701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 
00:06:43.815767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.815785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.815813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.815832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.828293 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.842930 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.861114 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.875389 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.888526 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.900606 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.911528 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.918510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.918552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.918566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.918582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.918597 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:43Z","lastTransitionTime":"2025-12-03T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.923476 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.934281 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.953025 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.964174 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.980832 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a24
73a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:43 crc kubenswrapper[4805]: I1203 00:06:43.994748 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:43Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.006555 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.017678 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.021848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc 
kubenswrapper[4805]: I1203 00:06:44.021879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.021888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.021902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.021911 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.035725 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.048433 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.061619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.074284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.087867 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:44Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.124779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.125121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.125229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.125354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.125448 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.227654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.227696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.227708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.227724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.227738 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.330593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.330633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.330644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.330660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.330671 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.422809 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:44 crc kubenswrapper[4805]: E1203 00:06:44.422972 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.423132 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:44 crc kubenswrapper[4805]: E1203 00:06:44.423300 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.423362 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:44 crc kubenswrapper[4805]: E1203 00:06:44.423423 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.433084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.433112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.433120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.433132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.433142 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.535767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.535814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.535823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.535840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.535855 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.638309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.638357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.638368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.638387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.638399 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.740935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.740976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.740988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.741005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.741014 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.843542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.843602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.843612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.843624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.843634 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.948037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.948080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.948091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.948107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:44 crc kubenswrapper[4805]: I1203 00:06:44.948120 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:44Z","lastTransitionTime":"2025-12-03T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.051016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.051064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.051074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.051092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.051106 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.154733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.154785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.154795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.154811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.154821 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.257063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.257094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.257102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.257115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.257124 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.360217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.360249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.360257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.360273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.360283 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.463651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.463680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.463688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.463702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.463711 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.565748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.565782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.565825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.565842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.565851 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.667806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.667855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.667866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.667883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.667896 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.769666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.769705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.769713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.769730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.769739 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.872309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.872363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.872376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.872395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.872410 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.974343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.974380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.974388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.974403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:45 crc kubenswrapper[4805]: I1203 00:06:45.974412 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:45Z","lastTransitionTime":"2025-12-03T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.076543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.076592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.076605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.076624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.076636 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.179871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.179908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.179917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.179934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.179946 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.292408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.292454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.292467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.292486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.292502 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.394781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.394828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.394837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.394851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.394860 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.423255 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.423376 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.423476 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:46 crc kubenswrapper[4805]: E1203 00:06:46.423600 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:46 crc kubenswrapper[4805]: E1203 00:06:46.423677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:46 crc kubenswrapper[4805]: E1203 00:06:46.423716 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.438645 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.449040 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.457783 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.469459 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.483343 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.496696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.496741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.496753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.496773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.496787 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.502432 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.516966 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.529799 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.548385 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.569031 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.587179 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.599962 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.600283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.600302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.600311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.600326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.600335 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.611408 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.624785 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.635476 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.702542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.702622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.702635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.702651 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.702664 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.805751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.805788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.805800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.805816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.805829 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.908610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.908664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.908680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.908706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:46 crc kubenswrapper[4805]: I1203 00:06:46.908722 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:46Z","lastTransitionTime":"2025-12-03T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.011722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.012036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.012158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.012355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.012481 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.115615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.115659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.115667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.115681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.115691 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.218550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.218618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.218630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.218651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.218675 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.320989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.321027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.321036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.321051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.321061 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.423285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.423326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.423336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.423353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.423364 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.526667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.526723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.526738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.526756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.526771 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.629592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.629650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.629666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.629690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.629710 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.687024 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/0.log" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.689405 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1" exitCode=1 Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.689449 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.690525 4805 scope.go:117] "RemoveContainer" containerID="a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.714710 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.732450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.732522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.732541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.732560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.732572 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.739919 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.753185 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.768588 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.790638 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:46Z\\\",\\\"message\\\":\\\"service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00734d6b7 
\\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1203 00:06:46.329733 6105 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da
230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.805131 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.818267 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.835749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.835815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.835828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc 
kubenswrapper[4805]: I1203 00:06:47.835894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.835910 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.843397 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.856845 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.877707 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.900812 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.928905 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.939666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.939709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:47 crc 
kubenswrapper[4805]: I1203 00:06:47.939724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.939740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.939750 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:47Z","lastTransitionTime":"2025-12-03T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.940318 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.949727 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:47 crc kubenswrapper[4805]: I1203 00:06:47.962179 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.042862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.042935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.042952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.042971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.043012 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.110525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.110567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.110577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.110594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.110606 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.122610 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.125976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.126017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.126074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.126095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.126109 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.138315 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.141563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.141629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.141642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.141660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.141675 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.155558 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.159031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.159060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.159071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.159105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.159115 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.169708 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.173491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.173530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.173541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.173557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.173570 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.185314 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.185461 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.187282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.187362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.187390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.187405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.187414 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.290108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.290148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.290160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.290277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.290296 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.393044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.393371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.393384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.393422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.393436 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.422669 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.422751 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.422815 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.422949 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.423052 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:48 crc kubenswrapper[4805]: E1203 00:06:48.423132 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.495895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.495922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.495939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.495959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.495970 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.552041 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n"] Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.552462 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.556026 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.556322 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.569823 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.582326 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.595292 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.598100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.598139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.598153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.598170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.598182 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.606327 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.619721 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.628222 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlknv\" (UniqueName: \"kubernetes.io/projected/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-kube-api-access-rlknv\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.628300 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.628322 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.628357 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.633727 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.651374 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.671393 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.686790 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.694762 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/0.log" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.697627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.697943 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.699746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.699801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.699818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.699835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.699849 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.706434 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.726778 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:46Z\\\",\\\"message\\\":\\\"service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00734d6b7 
\\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1203 00:06:46.329733 6105 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da
230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.729304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.729426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlknv\" (UniqueName: \"kubernetes.io/projected/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-kube-api-access-rlknv\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.729556 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.729590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.729995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.730708 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.735338 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.741324 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.747507 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlknv\" (UniqueName: \"kubernetes.io/projected/0a3f0b73-6cf4-4477-a7fd-0627f96339ab-kube-api-access-rlknv\") pod \"ovnkube-control-plane-749d76644c-x476n\" (UID: \"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.756493 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.769438 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.779834 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.794537 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.803848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.803888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.803900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.803918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.803929 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.813332 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.829426 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.851503 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8
f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.863177 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.866528 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: W1203 00:06:48.879843 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a3f0b73_6cf4_4477_a7fd_0627f96339ab.slice/crio-d0c328f28074aa460ad837bb399f9d22dcab5ea02e7d59b2c41da81ac61fa67c WatchSource:0}: Error finding container d0c328f28074aa460ad837bb399f9d22dcab5ea02e7d59b2c41da81ac61fa67c: Status 404 returned error can't find the container with id d0c328f28074aa460ad837bb399f9d22dcab5ea02e7d59b2c41da81ac61fa67c Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.884388 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.903047 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:46Z\\\",\\\"message\\\":\\\"service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00734d6b7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1203 00:06:46.329733 6105 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config
/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.907507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.907537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.907546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.907561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.907571 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:48Z","lastTransitionTime":"2025-12-03T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.922967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.936152 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 
00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.947440 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.957404 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.969558 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.980927 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:48 crc kubenswrapper[4805]: I1203 00:06:48.992827 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.005048 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.009776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.009814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.009822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.009836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.009846 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.016418 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.029154 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.113064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.113103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.113112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.113125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.113137 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.215818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.215864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.215884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.215903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.215918 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.318577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.318617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.318627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.318649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.318660 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.421492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.421540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.421555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.421572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.421584 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.524616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.524685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.524696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.524714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.524729 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.627908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.627958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.627970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.627988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.628003 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.641254 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-q4nqx"] Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.641835 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:49 crc kubenswrapper[4805]: E1203 00:06:49.641913 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.660139 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.670167 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.684532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.696595 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.702832 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/1.log" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.703514 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/0.log" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.706181 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191" exitCode=1 Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.706268 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.706323 4805 scope.go:117] "RemoveContainer" containerID="a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.707018 4805 
scope.go:117] "RemoveContainer" containerID="7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191" Dec 03 00:06:49 crc kubenswrapper[4805]: E1203 00:06:49.707187 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.708977 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" event={"ID":"0a3f0b73-6cf4-4477-a7fd-0627f96339ab","Type":"ContainerStarted","Data":"a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.709110 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" event={"ID":"0a3f0b73-6cf4-4477-a7fd-0627f96339ab","Type":"ContainerStarted","Data":"c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.709190 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" event={"ID":"0a3f0b73-6cf4-4477-a7fd-0627f96339ab","Type":"ContainerStarted","Data":"d0c328f28074aa460ad837bb399f9d22dcab5ea02e7d59b2c41da81ac61fa67c"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.713098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.726281 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.730739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc 
kubenswrapper[4805]: I1203 00:06:49.730769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.730779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.730793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.730803 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.744802 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:46Z\\\",\\\"message\\\":\\\"service-serving-signer@1740288168 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00734d6b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1203 00:06:46.329733 6105 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config
/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.757015 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.768546 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.780424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.788770 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.797134 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.806114 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.816436 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc 
kubenswrapper[4805]: I1203 00:06:49.828953 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.832727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.832771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.832780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.832796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.832806 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.839546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzpm\" (UniqueName: \"kubernetes.io/projected/3829c74e-7807-4b31-9b2a-2482ec95a235-kube-api-access-zjzpm\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.839617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.841243 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.852498 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.864997 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 
00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.879781 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.893034 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.910341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a139aff41459f06af9f3154fec05a23df010962ae68deb2fb6c663a135608fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:46Z\\\",\\\"message\\\":\\\"service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00734d6b7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1203 00:06:46.329733 6105 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.922046 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.933417 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.935019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.935043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.935051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.935065 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.935076 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:49Z","lastTransitionTime":"2025-12-03T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.940977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjzpm\" (UniqueName: \"kubernetes.io/projected/3829c74e-7807-4b31-9b2a-2482ec95a235-kube-api-access-zjzpm\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.941026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:49 crc kubenswrapper[4805]: E1203 00:06:49.941194 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:49 crc kubenswrapper[4805]: E1203 00:06:49.941285 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:50.441264401 +0000 UTC m=+34.290227007 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.943775 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.951753 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.958034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjzpm\" (UniqueName: \"kubernetes.io/projected/3829c74e-7807-4b31-9b2a-2482ec95a235-kube-api-access-zjzpm\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.960784 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.970135 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc kubenswrapper[4805]: I1203 00:06:49.979449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:49 crc 
kubenswrapper[4805]: I1203 00:06:49.991365 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.003360 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.013289 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.029788 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.037761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.037795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.037804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.037816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.037825 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.040380 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac
-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.054344 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.140321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.140358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.140366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.140378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.140388 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.142688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.142744 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.142784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.142836 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.142815512 +0000 UTC m=+49.991778128 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.142843 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.142885 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.142874683 +0000 UTC m=+49.991837289 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.142890 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.142925 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:06.142915374 +0000 UTC m=+49.991877980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.243279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.243329 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.243399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.243444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243452 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243471 4805 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243483 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243494 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243520 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243527 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:06.243510925 +0000 UTC m=+50.092473531 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.243458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243538 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.243572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.243586 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.243629 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-03 00:07:06.243604218 +0000 UTC m=+50.092566904 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.345096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.345140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.345151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.345184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.345213 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.422916 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.423065 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.423150 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.423233 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.423350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.423427 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.444577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.444708 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.444761 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:51.444743578 +0000 UTC m=+35.293706184 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.446837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.446868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.446878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.446893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.446906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.550759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.550852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.550879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.550910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.550956 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.659730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.659796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.659804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.659838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.659848 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.714830 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/1.log" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.719420 4805 scope.go:117] "RemoveContainer" containerID="7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191" Dec 03 00:06:50 crc kubenswrapper[4805]: E1203 00:06:50.719681 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.734372 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.753042 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.762368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.762410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.762434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.762452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.762466 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.773142 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.786003 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.801953 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.816623 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.828694 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.840394 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.852690 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc 
kubenswrapper[4805]: I1203 00:06:50.865550 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.865822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.865858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.865868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.865880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.865889 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.884485 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.894436 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.907222 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.924405 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.936618 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.947232 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.961971 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.968963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:50 crc 
kubenswrapper[4805]: I1203 00:06:50.969001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.969014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.969028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:50 crc kubenswrapper[4805]: I1203 00:06:50.969039 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:50Z","lastTransitionTime":"2025-12-03T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.071392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.071462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.071485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.071512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.071531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.174491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.174571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.174595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.174629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.174649 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.277714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.277760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.277774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.277792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.277804 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.380948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.380987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.380998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.381013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.381024 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.422487 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:51 crc kubenswrapper[4805]: E1203 00:06:51.422636 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.453050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:51 crc kubenswrapper[4805]: E1203 00:06:51.453253 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:51 crc kubenswrapper[4805]: E1203 00:06:51.453328 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:53.453304929 +0000 UTC m=+37.302267555 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.484652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.484704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.484719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.484737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.484747 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.587627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.587670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.587679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.587694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.587704 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.690474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.690567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.690582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.690601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.690620 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.794131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.794191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.794228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.794273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.794287 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.896637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.896684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.896696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.896713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.896724 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.999722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.999763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:51 crc kubenswrapper[4805]: I1203 00:06:51.999777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:51.999794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:51.999806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:51Z","lastTransitionTime":"2025-12-03T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.103041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.103092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.103104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.103124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.103137 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.205637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.205679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.205688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.205703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.205712 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.308438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.308487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.308498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.308516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.308527 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.411038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.411106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.411125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.411151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.411171 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.423334 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.423372 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.423335 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:52 crc kubenswrapper[4805]: E1203 00:06:52.423482 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:52 crc kubenswrapper[4805]: E1203 00:06:52.423583 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:52 crc kubenswrapper[4805]: E1203 00:06:52.423746 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.513974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.514018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.514027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.514043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.514053 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.616159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.616216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.616226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.616239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.616250 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.719381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.719425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.719437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.719453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.719466 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.822944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.823030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.823048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.823074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.823094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.926270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.926427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.926444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.926460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:52 crc kubenswrapper[4805]: I1203 00:06:52.926473 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:52Z","lastTransitionTime":"2025-12-03T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.030060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.030124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.030152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.030185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.030267 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.132705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.132743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.132754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.132770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.132779 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.235754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.235785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.235795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.235812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.235824 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.338490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.338526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.338534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.338547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.338557 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.422872 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:53 crc kubenswrapper[4805]: E1203 00:06:53.423016 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.440652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.440684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.440694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.440709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.440720 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.469313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:53 crc kubenswrapper[4805]: E1203 00:06:53.469494 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:53 crc kubenswrapper[4805]: E1203 00:06:53.469562 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:06:57.469541542 +0000 UTC m=+41.318504148 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.542718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.542758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.542770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.542784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.542794 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.645313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.645348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.645358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.645373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.645381 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.747409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.747454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.747464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.747479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.747490 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.813869 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.828033 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.840013 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.850765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.850803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.850812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.850830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.850842 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.851658 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.868472 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.880442 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.891868 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.904260 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.914153 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.923515 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.932942 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.940980 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc 
kubenswrapper[4805]: I1203 00:06:53.951747 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.953739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.953776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.953791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.953808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.953821 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:53Z","lastTransitionTime":"2025-12-03T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.964089 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.975926 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:53 crc kubenswrapper[4805]: I1203 00:06:53.994388 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.004443 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.017301 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.055651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.055686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.055696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.055709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.055717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.157711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.157761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.157771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.157789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.157802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.260889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.260929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.260941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.260955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.260967 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.363658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.363724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.363737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.363758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.363770 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.423516 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.423555 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.423673 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:54 crc kubenswrapper[4805]: E1203 00:06:54.423814 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:54 crc kubenswrapper[4805]: E1203 00:06:54.424352 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:54 crc kubenswrapper[4805]: E1203 00:06:54.424508 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.467210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.467255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.467268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.467290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.467306 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.569591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.569643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.569659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.569679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.569690 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.672087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.672125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.672137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.672153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.672163 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.775140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.775359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.775374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.775395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.775412 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.878696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.878811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.878840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.878878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.878903 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.982290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.982378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.982399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.982459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:54 crc kubenswrapper[4805]: I1203 00:06:54.982481 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:54Z","lastTransitionTime":"2025-12-03T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.085273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.085324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.085336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.085352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.085362 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.187040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.187088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.187116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.187130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.187141 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.289522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.289564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.289573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.289588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.289598 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.392156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.392218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.392229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.392244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.392255 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.422835 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:55 crc kubenswrapper[4805]: E1203 00:06:55.423094 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.494463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.494509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.494518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.494536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.494546 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.596456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.596502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.596511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.596525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.596536 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.699242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.699370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.699386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.699402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.699412 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.801430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.801465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.801484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.801498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.801507 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.904093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.904131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.904145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.904183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:55 crc kubenswrapper[4805]: I1203 00:06:55.904210 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:55Z","lastTransitionTime":"2025-12-03T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.007245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.007291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.007304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.007319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.007331 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.110692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.110764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.110787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.110821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.110845 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.213133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.213234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.213259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.213287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.213307 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.316430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.316471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.316484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.316501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.316512 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.419529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.419588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.419604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.419628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.419647 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.422899 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.422968 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:56 crc kubenswrapper[4805]: E1203 00:06:56.423000 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.422899 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:56 crc kubenswrapper[4805]: E1203 00:06:56.423112 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:56 crc kubenswrapper[4805]: E1203 00:06:56.423190 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.436445 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.472736 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054
cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.520449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/ma
nifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6
902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a564
6fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.521424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.521533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.521621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.521685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.521747 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.533363 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.546540 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.573973 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.590097 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\"
,\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.608412 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.622428 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.624670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 
00:06:56.624697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.624705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.624722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.624732 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.633705 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.646277 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.657250 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.667611 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.678279 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.689935 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.699943 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.710695 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:56 crc 
kubenswrapper[4805]: I1203 00:06:56.727736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.727782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.727792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.727807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.727818 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.830720 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.830759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.830776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.830797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.830814 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.932873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.932950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.932974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.933002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:56 crc kubenswrapper[4805]: I1203 00:06:56.933026 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:56Z","lastTransitionTime":"2025-12-03T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.035655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.035693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.035702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.035715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.035724 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.138343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.138389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.138402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.138420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.138435 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.241517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.241569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.241583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.241600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.241612 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.344317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.344597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.344677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.344775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.344853 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.422787 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:57 crc kubenswrapper[4805]: E1203 00:06:57.422983 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.447492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.447549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.447560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.447579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.447591 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.513260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:57 crc kubenswrapper[4805]: E1203 00:06:57.513472 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:57 crc kubenswrapper[4805]: E1203 00:06:57.513539 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:05.513513741 +0000 UTC m=+49.362476367 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.550227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.550279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.550292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.550310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.550325 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.652480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.652788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.652899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.652992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.653084 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.755532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.755858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.755965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.756076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.756156 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.858746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.859382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.859452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.859519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.859607 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.961951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.961985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.961994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.962010 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:57 crc kubenswrapper[4805]: I1203 00:06:57.962019 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:57Z","lastTransitionTime":"2025-12-03T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.065092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.065136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.065149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.065165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.065177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.167448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.167501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.167513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.167543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.167557 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.270274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.270303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.270311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.270324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.270336 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.372768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.372819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.372833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.372849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.372860 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.422902 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.422935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.423024 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.423268 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.424005 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.424157 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.437739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.437800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.437818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.437843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.437870 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.452636 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.458466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.458524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.458545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.458677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.458697 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.475667 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.482049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.482103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.482123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.482149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.482166 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.499937 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding attempt at 00:06:58.475667; elided] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.503980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.504019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.504029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.504049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.504059 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.517060 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.521236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.521297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.521321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.521349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.521375 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.536670 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 03 00:06:58 crc kubenswrapper[4805]: E1203 00:06:58.536846 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.538705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.538737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.538748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.538768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.538781 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.641709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.641763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.641778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.641798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.641811 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.745001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.745064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.745081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.745105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.745123 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.848687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.848751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.848768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.848795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.848813 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.952157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.952309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.952328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.952353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:58 crc kubenswrapper[4805]: I1203 00:06:58.952371 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:58Z","lastTransitionTime":"2025-12-03T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.055491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.055572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.055599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.055632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.055656 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.158832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.158906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.158923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.158944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.158961 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.262259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.262322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.262337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.262359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.262374 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.364595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.364676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.364703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.364746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.364773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.422967 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:06:59 crc kubenswrapper[4805]: E1203 00:06:59.423293 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.467763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.467843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.467861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.467893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.467914 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.570405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.570457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.570469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.570484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.570495 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.672941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.673000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.673013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.673030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.673044 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.776128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.776180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.776192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.776236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.776253 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.878561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.878635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.878655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.878721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.878740 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.981189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.981303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.981327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.981356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:06:59 crc kubenswrapper[4805]: I1203 00:06:59.981384 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:06:59Z","lastTransitionTime":"2025-12-03T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.083073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.083103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.083111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.083123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.083132 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.185442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.185493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.185505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.185520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.185532 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.287519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.287554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.287564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.287576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.287586 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.391171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.391222 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.391234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.391247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.391258 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.422939 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.422995 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.422946 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:00 crc kubenswrapper[4805]: E1203 00:07:00.423088 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:00 crc kubenswrapper[4805]: E1203 00:07:00.423229 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:00 crc kubenswrapper[4805]: E1203 00:07:00.423310 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.493984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.494031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.494042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.494063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.494075 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.596808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.596853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.596863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.596878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.596889 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.699450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.699500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.699516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.699545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.699562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.802164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.802211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.802220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.802233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.802245 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.905533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.905590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.905603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.905620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:00 crc kubenswrapper[4805]: I1203 00:07:00.905635 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:00Z","lastTransitionTime":"2025-12-03T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.008505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.008549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.008558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.008576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.008586 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.111949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.112025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.112051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.112082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.112109 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.215235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.215292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.215307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.215327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.215340 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.317790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.317826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.317834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.317849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.317860 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.420978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.421030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.421042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.421060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.421074 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.423273 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:01 crc kubenswrapper[4805]: E1203 00:07:01.423432 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.523545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.523619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.523634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.523656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.523673 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.626343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.626395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.626403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.626418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.626430 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.729443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.729483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.729498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.729513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.729523 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.833109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.833184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.833233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.833260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.833278 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.936540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.936591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.936604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.936624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:01 crc kubenswrapper[4805]: I1203 00:07:01.936637 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:01Z","lastTransitionTime":"2025-12-03T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.040327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.040410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.040441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.040474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.040498 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.144190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.144304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.144324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.144355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.144375 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.247978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.248048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.248066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.248091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.248110 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.340638 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.351272 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.351975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.352042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.352064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.352096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.352119 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.353591 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.371929 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.391835 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.404325 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.423173 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.423309 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.423426 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.423532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: E1203 00:07:02.423573 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:02 crc kubenswrapper[4805]: E1203 00:07:02.423805 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:02 crc kubenswrapper[4805]: E1203 00:07:02.423909 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.438902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.451765 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.454843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.454877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.454889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.454909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.454924 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.469057 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.483965 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc 
kubenswrapper[4805]: I1203 00:07:02.501721 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.524151 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.540513 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.557721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.557772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.557785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.557802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.557815 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.560769 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.584005 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.602259 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\"
,\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.617407 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.632410 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:02Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.661162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc 
kubenswrapper[4805]: I1203 00:07:02.661252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.661272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.661292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.661305 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.763967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.764034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.764047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.764065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.764078 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.867838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.867946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.867971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.868002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.868027 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.972006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.972055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.972071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.972090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:02 crc kubenswrapper[4805]: I1203 00:07:02.972101 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:02Z","lastTransitionTime":"2025-12-03T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.075923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.076029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.076613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.076771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.076989 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.186529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.186577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.186585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.186600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.186611 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.289638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.289682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.289691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.289704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.289715 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.392427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.392462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.392471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.392484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.392494 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.422272 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:03 crc kubenswrapper[4805]: E1203 00:07:03.422407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.495274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.495314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.495333 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.495356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.495370 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.598695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.598747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.598761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.598783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.598796 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.702572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.702627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.702640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.702658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.702670 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.805653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.805723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.805739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.805762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.805777 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.908434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.908501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.908511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.908528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:03 crc kubenswrapper[4805]: I1203 00:07:03.908541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:03Z","lastTransitionTime":"2025-12-03T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.012141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.012259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.012278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.012298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.012312 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.115861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.115956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.115979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.116014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.116037 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.219230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.219534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.219567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.219587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.219602 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.322136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.322184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.322213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.322230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.322240 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.423072 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.423174 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:04 crc kubenswrapper[4805]: E1203 00:07:04.423255 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.423169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:04 crc kubenswrapper[4805]: E1203 00:07:04.423390 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:04 crc kubenswrapper[4805]: E1203 00:07:04.423677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.425178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.425260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.425274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.425297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.425312 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.528856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.528915 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.528927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.528944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.528958 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.631541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.631579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.631589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.631602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.631612 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.733804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.733850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.733860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.733877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.733887 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.836865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.836907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.836922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.836939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.836949 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.939700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.940021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.940096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.940183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:04 crc kubenswrapper[4805]: I1203 00:07:04.940349 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:04Z","lastTransitionTime":"2025-12-03T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.042761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.042796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.042805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.042820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.042831 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.145632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.145676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.145686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.145701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.145710 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.248361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.248642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.248725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.248821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.248904 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.352486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.352554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.352567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.352589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.352603 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.422511 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:05 crc kubenswrapper[4805]: E1203 00:07:05.422664 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.455286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.455330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.455338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.455354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.455364 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.557360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.557393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.557401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.557413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.557424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.614212 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:05 crc kubenswrapper[4805]: E1203 00:07:05.614495 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:05 crc kubenswrapper[4805]: E1203 00:07:05.614693 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:21.614666358 +0000 UTC m=+65.463628964 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.660034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.660463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.660525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.660551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.660569 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.763871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.763933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.763945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.763965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.763980 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.866982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.867060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.867071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.867085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.867097 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.971341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.971385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.971398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.971416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:05 crc kubenswrapper[4805]: I1203 00:07:05.971427 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:05Z","lastTransitionTime":"2025-12-03T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.074774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.074809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.074836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.074850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.074858 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.178001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.178047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.178055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.178073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.178084 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.221606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.221697 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.221731 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.221906 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.222008 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:07:38.221952895 +0000 UTC m=+82.070915541 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.221926 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.222074 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:38.222056058 +0000 UTC m=+82.071018704 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.222131 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:38.222106219 +0000 UTC m=+82.071068825 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.280267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.280302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.280311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.280324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.280333 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.323480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.323610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323709 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323745 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323759 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323804 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 
00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323832 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323852 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323872 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:38.323843969 +0000 UTC m=+82.172806585 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.323932 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:38.323903221 +0000 UTC m=+82.172865867 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.383939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.384030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.384052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.384079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.384126 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.422829 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.422845 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.422984 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.423007 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.423583 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:06 crc kubenswrapper[4805]: E1203 00:07:06.423950 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.424296 4805 scope.go:117] "RemoveContainer" containerID="7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.442568 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.463112 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.475608 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.489428 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.490099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.490173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.490530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.490603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.490618 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.509514 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8
e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.521946 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.533533 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.544709 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.555746 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.566951 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.578143 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc 
kubenswrapper[4805]: I1203 00:07:06.593441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.593479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.593488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.593506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.593519 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.593836 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.609596 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.627725 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8
f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.642232 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.657917 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.681236 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.697126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.697173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.697186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.697225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.697240 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.703771 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature 
gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.782360 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/1.log" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.787835 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" 
event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.788506 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.806904 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-
cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.807318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.807345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc 
kubenswrapper[4805]: I1203 00:07:06.807355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.807369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.807379 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.820032 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.831185 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.843128 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.854825 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.867231 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.880450 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.890727 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.900726 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.910477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.910509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.910518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.910534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.910545 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:06Z","lastTransitionTime":"2025-12-03T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.919425 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.932537 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc 
kubenswrapper[4805]: I1203 00:07:06.944240 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.965158 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:06 crc kubenswrapper[4805]: I1203 00:07:06.988184 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8
f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:06Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.001944 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.014022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.014115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.014126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.014140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.014149 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.014742 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.037971 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.051359 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\"
,\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.116769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.116813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.116822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.116838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.116847 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.219993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.220054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.220065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.220081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.220092 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.322307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.322366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.322375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.322393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.322407 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.427496 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:07 crc kubenswrapper[4805]: E1203 00:07:07.427634 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.429056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.429093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.429103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.429141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.429153 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.531760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.531810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.531820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.531840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.531851 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.633976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.634028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.634039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.634057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.634072 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.736399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.736438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.736448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.736462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.736473 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.793019 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/2.log" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.793538 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/1.log" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.796909 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3" exitCode=1 Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.796984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.797054 4805 scope.go:117] "RemoveContainer" containerID="7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.797553 4805 scope.go:117] "RemoveContainer" containerID="226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3" Dec 03 00:07:07 crc kubenswrapper[4805]: E1203 00:07:07.797740 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.813834 4805 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.839543 4805 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.839952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.839969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.839996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.840015 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.847382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7337578233acfc26d3ff7eaa5c6f95c83c2a1a4f5fc8e02a87118d42f2e13191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"message\\\":\\\" 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463629 6246 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1203 00:06:48.463633 6246 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1203 00:06:48.463637 6246 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 00:06:48.463656 6246 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nI1203 00:06:48.463666 6246 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-29wnh\\\\nF1203 00:06:48.463664 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.867483 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":
\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.881182 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.895163 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.905907 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.919405 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.931599 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.943691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.943827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.943904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.943972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.943796 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.944038 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:07Z","lastTransitionTime":"2025-12-03T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.956063 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.967673 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.976629 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:07 crc kubenswrapper[4805]: I1203 00:07:07.989568 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.000532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:07Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc 
kubenswrapper[4805]: I1203 00:07:08.013487 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.028401 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.046666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.046724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.046737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.046753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.046765 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.050104 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.060949 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.149504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.149538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.149547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc 
kubenswrapper[4805]: I1203 00:07:08.149559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.149570 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.252359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.252408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.252415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.252430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.252457 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.354531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.354563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.354572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.354584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.354593 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.423613 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.423613 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.423798 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.423649 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.423891 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.424183 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.456807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.456853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.456862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.456877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.456888 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.560268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.560351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.560372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.560405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.560435 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.663722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.663763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.663775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.663803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.663823 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.668689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.668727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.668737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.668753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.668766 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.681361 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.685254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.685296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.685310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.685328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.685339 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.697134 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.701006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.701045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.701053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.701068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.701079 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.713185 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.716714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.716763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.716774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.716790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.716802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.729091 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.733577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.733626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.733637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.733652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.733663 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.746255 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.746532 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.766367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.766420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.766429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.766445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.766455 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.802262 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/2.log" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.806450 4805 scope.go:117] "RemoveContainer" containerID="226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3" Dec 03 00:07:08 crc kubenswrapper[4805]: E1203 00:07:08.806663 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.819480 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.832537 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.854101 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.869546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.869586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.869599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.869615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.869628 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.869671 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature 
gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.886270 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.900731 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.913131 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.927280 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.942117 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.955479 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.968728 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.972118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.972214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.972231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 
00:07:08.972254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.972266 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:08Z","lastTransitionTime":"2025-12-03T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.983429 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:08 crc kubenswrapper[4805]: I1203 00:07:08.995552 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:08Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.007703 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.019119 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:09 crc 
kubenswrapper[4805]: I1203 00:07:09.031077 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.047550 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.070446 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8
f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.075489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.075526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.075536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.075557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.075568 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.178342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.178399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.178414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.178438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.178453 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.281262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.281300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.281309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.281325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.281335 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.388851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.388889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.388898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.388911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.388922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.422472 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:09 crc kubenswrapper[4805]: E1203 00:07:09.422614 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.491690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.491792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.491803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.491819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.491832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.594936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.594980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.594992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.595008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.595020 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.697210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.697255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.697269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.697284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.697296 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.800307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.800342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.800350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.800366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.800377 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.903176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.903276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.903285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.903298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:09 crc kubenswrapper[4805]: I1203 00:07:09.903308 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:09Z","lastTransitionTime":"2025-12-03T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.005626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.005666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.005676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.005690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.005699 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.108322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.108369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.108386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.108404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.108416 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.211287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.211325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.211334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.211347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.211357 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.313959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.314025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.314038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.314058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.314071 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.416623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.416658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.416666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.416680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.416690 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.423018 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.423143 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.423022 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:10 crc kubenswrapper[4805]: E1203 00:07:10.423309 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:10 crc kubenswrapper[4805]: E1203 00:07:10.423142 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:10 crc kubenswrapper[4805]: E1203 00:07:10.423363 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.518470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.518532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.518548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.518567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.518584 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.621340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.621388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.621398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.621414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.621424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.724702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.724761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.724774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.724795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.724809 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.827401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.827450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.827465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.827485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.827499 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.930156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.930231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.930245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.930294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:10 crc kubenswrapper[4805]: I1203 00:07:10.930311 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:10Z","lastTransitionTime":"2025-12-03T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.033080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.033117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.033126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.033140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.033150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.135620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.135658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.135666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.135682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.135691 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.237667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.237706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.237715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.237728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.237736 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.340504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.340570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.340590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.340614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.340630 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.422496 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:11 crc kubenswrapper[4805]: E1203 00:07:11.422671 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.443886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.443965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.443990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.444021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.444046 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.547364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.547408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.547417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.547430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.547440 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.650906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.650956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.650968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.650992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.651018 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.753484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.753525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.753534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.753550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.753564 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.856397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.856451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.856460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.856478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.856488 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.959345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.959397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.959409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.959428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:11 crc kubenswrapper[4805]: I1203 00:07:11.959440 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:11Z","lastTransitionTime":"2025-12-03T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.062095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.062133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.062142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.062155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.062165 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.164977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.165070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.165101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.165143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.165171 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.268151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.268256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.268272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.268295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.268316 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.370757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.370802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.370812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.370827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.370847 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.422683 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.422765 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.422712 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:12 crc kubenswrapper[4805]: E1203 00:07:12.422929 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:12 crc kubenswrapper[4805]: E1203 00:07:12.423023 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:12 crc kubenswrapper[4805]: E1203 00:07:12.423088 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.473846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.473896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.473911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.473929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.473942 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.576327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.576392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.576406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.576427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.576441 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.679107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.679182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.679214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.679239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.679253 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.781804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.781843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.781853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.781870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.781880 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.885251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.885327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.885345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.885405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.885424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.988515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.988579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.988598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.988622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:12 crc kubenswrapper[4805]: I1203 00:07:12.988640 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:12Z","lastTransitionTime":"2025-12-03T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.093365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.093462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.093472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.093492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.093502 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.200714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.200780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.200797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.200818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.200844 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.303169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.303258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.303269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.303285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.303296 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.406111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.406231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.406265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.406299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.406326 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.422391 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:13 crc kubenswrapper[4805]: E1203 00:07:13.422568 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.508552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.508602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.508611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.508625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.508636 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.611312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.611359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.611368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.611383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.611420 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.715141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.715211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.715221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.715238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.715249 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.818450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.818505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.818524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.818558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.818595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.921260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.921300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.921313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.921328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:13 crc kubenswrapper[4805]: I1203 00:07:13.921339 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:13Z","lastTransitionTime":"2025-12-03T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.024288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.024337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.024348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.024366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.024377 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.127778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.127820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.127830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.127845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.127855 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.230676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.230726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.230742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.230766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.230784 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.333279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.333345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.333360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.333375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.333384 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.423327 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.423380 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.423474 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:14 crc kubenswrapper[4805]: E1203 00:07:14.423517 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:14 crc kubenswrapper[4805]: E1203 00:07:14.423604 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:14 crc kubenswrapper[4805]: E1203 00:07:14.423675 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.435722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.435766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.435786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.435804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.435821 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.539207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.539243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.539259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.539275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.539285 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.642384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.642455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.642465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.642478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.642488 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.745341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.745381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.745395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.745410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.745423 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.848238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.848324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.848348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.848379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.848408 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.951787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.951849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.951859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.951872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:14 crc kubenswrapper[4805]: I1203 00:07:14.951882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:14Z","lastTransitionTime":"2025-12-03T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.054091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.054125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.054133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.054148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.054156 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.156710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.156814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.156823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.156835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.156844 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.259709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.259802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.259816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.259831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.259842 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.362633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.362704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.362714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.362730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.362741 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.422735 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:15 crc kubenswrapper[4805]: E1203 00:07:15.422982 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.465615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.465684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.465707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.465742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.465766 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.569614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.569669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.569682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.569701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.569714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.672267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.672307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.672316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.672332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.672341 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.775626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.775673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.775689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.775704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.775715 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.881977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.882043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.882055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.882071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.882087 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.985579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.985651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.985671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.985701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:15 crc kubenswrapper[4805]: I1203 00:07:15.985721 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:15Z","lastTransitionTime":"2025-12-03T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.088556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.088606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.088618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.088635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.088652 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.190990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.191054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.191064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.191076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.191088 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.293970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.294016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.294025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.294041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.294051 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.396517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.396601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.396625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.396653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.396672 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.422458 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.422517 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.422457 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:16 crc kubenswrapper[4805]: E1203 00:07:16.422642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:16 crc kubenswrapper[4805]: E1203 00:07:16.422769 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:16 crc kubenswrapper[4805]: E1203 00:07:16.423245 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.456646 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.469363 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.483490 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.498780 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 
00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.499231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.499260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.499270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.499287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.499298 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.515339 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.528641 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.557708 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.578967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.598468 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.601837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.601866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.601875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.601887 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.601896 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.609743 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.619729 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.629635 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.639135 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc 
kubenswrapper[4805]: I1203 00:07:16.651481 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.663827 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.676934 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.689140 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.699906 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.703785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.703813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.703825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.703840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.703851 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.805653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.805687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.805695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.805708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.805717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.908336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.908369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.908378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.908393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:16 crc kubenswrapper[4805]: I1203 00:07:16.908405 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:16Z","lastTransitionTime":"2025-12-03T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.010768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.010804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.010814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.010828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.010836 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.113320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.113379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.113392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.113426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.113438 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.217159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.217218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.217233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.217250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.217264 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.319900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.320269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.320397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.320495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.320590 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.422278 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:17 crc kubenswrapper[4805]: E1203 00:07:17.422443 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.423769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.423812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.423828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.423846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.423860 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.526651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.526699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.526710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.526729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.526741 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.629112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.629334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.629364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.629382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.629394 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.732428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.732479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.732493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.732508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.732520 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.834162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.834258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.834277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.834300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.834321 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.936987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.937310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.937437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.937596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:17 crc kubenswrapper[4805]: I1203 00:07:17.937707 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:17Z","lastTransitionTime":"2025-12-03T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.042093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.042160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.042182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.042242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.042270 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.145729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.146053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.146178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.146346 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.146458 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.249661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.249686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.249695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.249707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.249716 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.352083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.352120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.352128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.352143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.352152 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.423012 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.423012 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:18 crc kubenswrapper[4805]: E1203 00:07:18.423219 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.423031 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:18 crc kubenswrapper[4805]: E1203 00:07:18.423404 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:18 crc kubenswrapper[4805]: E1203 00:07:18.423485 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.454224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.454278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.454292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.454308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.454320 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.556669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.556714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.556831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.556852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.556865 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.659166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.659247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.659260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.659281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.659294 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.762159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.762264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.762287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.762311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.762329 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.864791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.864836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.864846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.864865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.864877 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.967222 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.967274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.967286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.967304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:18 crc kubenswrapper[4805]: I1203 00:07:18.967314 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:18Z","lastTransitionTime":"2025-12-03T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.066420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.066471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.066479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.066500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.066511 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: E1203 00:07:19.081655 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.086501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.086556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.086574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.086597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.086611 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: E1203 00:07:19.099732 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.103705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.103847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.103936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.104016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.104098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: E1203 00:07:19.118334 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.123930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.123980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.123992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.124015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.124026 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: E1203 00:07:19.138183 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.142839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.142882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.142896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.142913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.142960 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: E1203 00:07:19.158638 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:19 crc kubenswrapper[4805]: E1203 00:07:19.158940 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.160637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.160677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.160690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.160707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.160720 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.262766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.262831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.262856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.262882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.262902 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.365697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.365741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.365752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.365769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.365782 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.423315 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:19 crc kubenswrapper[4805]: E1203 00:07:19.423474 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.467995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.468053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.468067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.468085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.468102 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.571075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.571142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.571155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.571172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.571185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.673658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.673704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.673713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.673729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.673740 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.779676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.780056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.780147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.780265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.780370 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.882958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.883026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.883039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.883055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.883065 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.985617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.985660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.985670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.985687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:19 crc kubenswrapper[4805]: I1203 00:07:19.985698 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:19Z","lastTransitionTime":"2025-12-03T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.088166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.088233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.088246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.088264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.088276 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.190185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.190455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.190563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.190654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.190849 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.292868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.293591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.293679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.293757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.293826 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.396838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.396890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.396903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.396920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.396934 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.423216 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:20 crc kubenswrapper[4805]: E1203 00:07:20.423427 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.423517 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.423444 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:20 crc kubenswrapper[4805]: E1203 00:07:20.423630 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:20 crc kubenswrapper[4805]: E1203 00:07:20.423924 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.499473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.499513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.499521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.499536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.499545 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.601794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.601832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.601844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.601861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.601872 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.703959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.704257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.704349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.704447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.704531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.806917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.806949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.806957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.806970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.806979 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.909801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.909849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.909859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.909877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:20 crc kubenswrapper[4805]: I1203 00:07:20.909887 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:20Z","lastTransitionTime":"2025-12-03T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.012566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.012603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.012611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.012624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.012632 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.117658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.117765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.117790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.117821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.117843 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.220690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.220739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.220750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.220772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.220785 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.324143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.324266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.324291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.324320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.324339 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.423210 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:21 crc kubenswrapper[4805]: E1203 00:07:21.423366 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.428912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.428942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.428952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.428967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.428977 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.531555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.531604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.531613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.531630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.531641 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.633939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.633998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.634010 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.634028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.634038 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.683957 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:21 crc kubenswrapper[4805]: E1203 00:07:21.684251 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:21 crc kubenswrapper[4805]: E1203 00:07:21.684372 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:07:53.684328606 +0000 UTC m=+97.533291262 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.736933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.737001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.737024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.737055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.737078 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.840432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.840496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.840507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.840522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.840538 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.943002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.943052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.943063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.943080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:21 crc kubenswrapper[4805]: I1203 00:07:21.943098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:21Z","lastTransitionTime":"2025-12-03T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.045698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.045766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.045782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.045805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.045821 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.148508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.148551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.148563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.148586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.148599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.251314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.251367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.251388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.251412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.251424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.354002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.354057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.354067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.354084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.354098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.422894 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.422906 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.422930 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:22 crc kubenswrapper[4805]: E1203 00:07:22.423311 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:22 crc kubenswrapper[4805]: E1203 00:07:22.423475 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.424105 4805 scope.go:117] "RemoveContainer" containerID="226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3" Dec 03 00:07:22 crc kubenswrapper[4805]: E1203 00:07:22.424329 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:22 crc kubenswrapper[4805]: E1203 00:07:22.424632 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.456881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.456926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.456938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.456957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.456970 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.559291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.559341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.559352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.559370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.559384 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.662914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.662962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.662972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.662989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.663002 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.766032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.766128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.766174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.766246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.766282 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.868730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.868827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.868849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.868934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.868998 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.972529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.972639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.972656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.972675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:22 crc kubenswrapper[4805]: I1203 00:07:22.972692 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:22Z","lastTransitionTime":"2025-12-03T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.075592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.075636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.075645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.075661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.075673 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.178316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.178353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.178361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.178375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.178384 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.281939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.282024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.282043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.282078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.282102 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.384802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.384836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.384845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.384858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.384867 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.423123 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:23 crc kubenswrapper[4805]: E1203 00:07:23.423558 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.488235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.488291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.488303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.488326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.488341 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.591109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.591474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.591617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.591707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.591805 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.695185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.695853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.696014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.696099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.696177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.799140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.799189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.799224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.799248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.799259 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.850922 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/0.log" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.850976 4805 generic.go:334] "Generic (PLEG): container finished" podID="839326a5-41df-492f-83c4-3ee9e2964dc8" containerID="861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1" exitCode=1 Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.851008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerDied","Data":"861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.851457 4805 scope.go:117] "RemoveContainer" containerID="861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.872423 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.885028 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.904328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.904414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.904428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.904468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.904483 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:23Z","lastTransitionTime":"2025-12-03T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.907305 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.926050 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.941619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\"
,\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.957111 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.973271 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.983975 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:23 crc kubenswrapper[4805]: I1203 00:07:23.995360 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:23Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.007309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.007339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.007347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.007364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.007374 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.011438 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.028854 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.039989 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.050490 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.059047 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.069424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.077776 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc 
kubenswrapper[4805]: I1203 00:07:24.088635 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.098613 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.109449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.109506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.109520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.109546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.109562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.211906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.211988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.212111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.212341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.212360 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.314422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.314474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.314484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.314499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.314508 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.417901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.417940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.417948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.417963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.417973 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.423244 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.423293 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:24 crc kubenswrapper[4805]: E1203 00:07:24.423365 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.423247 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:24 crc kubenswrapper[4805]: E1203 00:07:24.423438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:24 crc kubenswrapper[4805]: E1203 00:07:24.423590 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.520167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.520527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.520594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.520680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.520744 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.623798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.623868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.623880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.623898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.623909 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.726442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.726496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.726506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.726520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.726530 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.829004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.829455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.829519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.829555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.829576 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.856155 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/0.log" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.856232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerStarted","Data":"00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.870923 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.886058 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.898820 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.920686 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.932310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.932367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.932378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.932396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.932407 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:24Z","lastTransitionTime":"2025-12-03T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.933945 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.945649 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 
2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.958751 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.968118 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.979656 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:24 crc kubenswrapper[4805]: I1203 00:07:24.993704 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:24Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.005496 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.014887 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.024897 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.034518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.034555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.034564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.034579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.034593 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.035770 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.047156 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc 
kubenswrapper[4805]: I1203 00:07:25.067380 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.079384 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.093368 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:25Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.137268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.137307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.137325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.137338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.137347 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.240077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.240402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.240495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.240583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.240707 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.343189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.343260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.343272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.343289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.343300 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.423076 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:25 crc kubenswrapper[4805]: E1203 00:07:25.423274 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.445761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.445805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.445815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.445835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.445849 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.549128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.549233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.549254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.549281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.549299 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.651953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.652002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.652053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.652075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.652089 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.755021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.755072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.755088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.755110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.755128 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.858563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.858644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.858657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.858698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.858714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.961936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.962011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.962029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.962058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:25 crc kubenswrapper[4805]: I1203 00:07:25.962076 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:25Z","lastTransitionTime":"2025-12-03T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.065701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.065773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.065791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.065821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.065841 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.169880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.169943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.169964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.169992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.170015 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.273093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.273134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.273145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.273159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.273170 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.375529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.375564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.375573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.375588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.375597 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.423169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.423236 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.423178 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:26 crc kubenswrapper[4805]: E1203 00:07:26.423380 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:26 crc kubenswrapper[4805]: E1203 00:07:26.423587 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:26 crc kubenswrapper[4805]: E1203 00:07:26.423303 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.441607 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.452038 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.466318 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.477941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.478012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.478028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.478045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.478084 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.483720 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.497838 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.511227 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.530931 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.544652 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.557411 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.571705 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.580851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.580894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.580905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.580925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.580934 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.583800 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.593567 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.606898 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc 
kubenswrapper[4805]: I1203 00:07:26.617629 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.630311 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.645677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.656165 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.666341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:26Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.682960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.683001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.683012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.683030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.683040 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.785240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.785284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.785293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.785310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.785321 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.887924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.887969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.887981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.887996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.888007 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.990523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.990580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.990594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.990613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:26 crc kubenswrapper[4805]: I1203 00:07:26.990966 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:26Z","lastTransitionTime":"2025-12-03T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.093438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.093480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.093490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.093522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.093534 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.196145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.196188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.196211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.196234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.196255 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.298805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.298872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.298911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.298943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.298953 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.401686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.401726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.401737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.401752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.401765 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.423127 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:27 crc kubenswrapper[4805]: E1203 00:07:27.423248 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.504419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.504463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.504471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.504484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.504493 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.606846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.606883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.606894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.606911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.606921 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.709566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.709599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.709608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.709621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.709631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.812325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.812378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.812394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.812417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.812434 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.915584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.915625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.915636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.915650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:27 crc kubenswrapper[4805]: I1203 00:07:27.915660 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:27Z","lastTransitionTime":"2025-12-03T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.018725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.018775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.018786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.018804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.018815 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.122332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.122386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.122401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.122420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.122431 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.225164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.225252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.225265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.225284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.225299 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.328221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.328269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.328282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.328298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.328310 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.422260 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.422369 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:28 crc kubenswrapper[4805]: E1203 00:07:28.422417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.422260 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:28 crc kubenswrapper[4805]: E1203 00:07:28.422554 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:28 crc kubenswrapper[4805]: E1203 00:07:28.422823 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.430554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.430625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.430646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.430669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.430690 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.439646 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.533656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.533709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.533722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.533743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.533758 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.636434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.636502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.636527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.636562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.636586 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.739890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.739952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.739963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.739981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.739993 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.842486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.842529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.842540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.842556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.842568 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.945335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.945384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.945399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.945417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:28 crc kubenswrapper[4805]: I1203 00:07:28.945431 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:28Z","lastTransitionTime":"2025-12-03T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.047208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.047243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.047253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.047269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.047280 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.150288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.150339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.150351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.150366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.150375 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.252423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.252461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.252470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.252488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.252526 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.355904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.355956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.355969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.355984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.355995 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.423320 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:29 crc kubenswrapper[4805]: E1203 00:07:29.423495 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.458682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.458997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.459082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.459175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.459303 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.496341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.496642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.496794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.496998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.497129 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: E1203 00:07:29.509014 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.513285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.513465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.513531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.513605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.513667 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: E1203 00:07:29.527559 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.531294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.531315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.531324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.531337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.531347 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: E1203 00:07:29.544348 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.548154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.548237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.548249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.548265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.548276 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: E1203 00:07:29.562436 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.567053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.567115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.567128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.567147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.567176 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: E1203 00:07:29.581302 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:29 crc kubenswrapper[4805]: E1203 00:07:29.581465 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.583525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.583641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.583712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.583780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.583846 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.687325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.687641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.687812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.687948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.688082 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.791532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.792009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.792291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.792519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.792696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.895332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.895379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.895395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.895417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.895436 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.998034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.998066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.998074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.998087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:29 crc kubenswrapper[4805]: I1203 00:07:29.998111 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:29Z","lastTransitionTime":"2025-12-03T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.101516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.101585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.101606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.101631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.101648 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.204728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.205146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.205356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.205518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.205653 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.308746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.308839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.308869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.308892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.308912 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.412379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.412447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.412470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.412501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.412524 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.423253 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.423347 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.423403 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:30 crc kubenswrapper[4805]: E1203 00:07:30.423669 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:30 crc kubenswrapper[4805]: E1203 00:07:30.423866 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:30 crc kubenswrapper[4805]: E1203 00:07:30.424050 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.515987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.516060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.516086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.516117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.516142 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.619375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.619463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.619513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.619537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.619554 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.722124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.722292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.722310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.722333 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.722353 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.825148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.825451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.825552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.825629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.825708 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.929697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.929744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.929754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.929770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:30 crc kubenswrapper[4805]: I1203 00:07:30.929781 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:30Z","lastTransitionTime":"2025-12-03T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.033116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.033470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.033559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.033643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.033718 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.136656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.136741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.136766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.136799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.136822 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.239368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.239421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.239436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.239456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.239471 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.343643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.343743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.343763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.343792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.343812 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.422891 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:31 crc kubenswrapper[4805]: E1203 00:07:31.423139 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.447467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.447519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.447530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.447551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.447562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.551013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.551070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.551080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.551100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.551112 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.655299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.655369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.655395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.655420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.655440 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.758716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.758799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.758823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.758861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.758974 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.862622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.862981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.863173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.863376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.863535 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.966544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.966584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.966597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.966612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:31 crc kubenswrapper[4805]: I1203 00:07:31.966622 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:31Z","lastTransitionTime":"2025-12-03T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.070129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.070174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.070184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.070229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.070241 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.173628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.173721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.173740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.173760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.173773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.285524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.285626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.285653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.285697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.285720 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.388589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.388641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.388652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.388667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.388681 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.422814 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.422917 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.422830 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:32 crc kubenswrapper[4805]: E1203 00:07:32.423039 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:32 crc kubenswrapper[4805]: E1203 00:07:32.423111 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:32 crc kubenswrapper[4805]: E1203 00:07:32.423319 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.491587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.491651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.491668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.491692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.491711 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.595162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.595229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.595239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.595255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.595268 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.698057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.698104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.698113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.698126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.698137 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.800593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.800641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.800652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.800667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.800678 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.903181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.903281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.903305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.903332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:32 crc kubenswrapper[4805]: I1203 00:07:32.903356 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:32Z","lastTransitionTime":"2025-12-03T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.006696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.006748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.006764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.006783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.006799 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.110866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.110967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.111007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.111115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.111194 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.215425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.215456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.215464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.215477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.215486 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.318390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.318444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.318456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.318476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.318489 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.421364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.421410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.421426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.421465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.421482 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.422953 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:33 crc kubenswrapper[4805]: E1203 00:07:33.423235 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.424382 4805 scope.go:117] "RemoveContainer" containerID="226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.524148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.524189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.524216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.524237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.524247 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.626665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.626709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.626725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.626744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.626758 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.728959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.729014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.729026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.729047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.729062 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.831641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.831682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.831691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.831705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.831714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.887734 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/2.log" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.890366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.890869 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.914541 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a
739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.927967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.936676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.936732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.936746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:33 crc 
kubenswrapper[4805]: I1203 00:07:33.936766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.936970 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:33Z","lastTransitionTime":"2025-12-03T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.945965 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c
d892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.959652 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\"
,\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.971643 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:33 crc kubenswrapper[4805]: I1203 00:07:33.984943 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.002381 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.020799 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.036015 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.039559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.039609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.039625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.039643 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.039656 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.056435 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.068364 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.080319 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.093408 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc 
kubenswrapper[4805]: I1203 00:07:34.118136 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595dab30-121e-47a9-9d14-b3e0658b0da2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b9bd6a2ff9fb5775064416b40e7c898532700f85cd1f6fc749af0fb3618454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.140385 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.141872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.141892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.141900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.141914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.141923 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.161867 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.177363 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.195154 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.206086 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:34Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.244023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.244060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.244070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.244086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.244095 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.346673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.346721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.346734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.346750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.346762 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.422833 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.422903 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.422833 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:34 crc kubenswrapper[4805]: E1203 00:07:34.422997 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:34 crc kubenswrapper[4805]: E1203 00:07:34.423072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:34 crc kubenswrapper[4805]: E1203 00:07:34.423145 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.449101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.449155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.449166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.449185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.449221 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.551416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.551460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.551470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.551489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.551499 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.654519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.654584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.654613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.654636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.654654 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.757602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.757643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.757652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.757665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.757675 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.860567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.860618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.860630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.860650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:34 crc kubenswrapper[4805]: I1203 00:07:34.860662 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:34Z","lastTransitionTime":"2025-12-03T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.034744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.034791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.034800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.034815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.034828 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.137478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.137533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.137543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.137563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.137574 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.240303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.240352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.240361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.240377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.240387 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.344893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.344944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.344955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.344971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.344981 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.422695 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:35 crc kubenswrapper[4805]: E1203 00:07:35.422875 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.448615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.448671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.448683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.448699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.448710 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.550776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.550811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.550819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.550835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.550844 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.653491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.653554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.653567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.653587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.653598 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.756606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.756672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.756688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.756709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.756721 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.859918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.859962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.859971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.859986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.859996 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.900973 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/3.log" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.901737 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/2.log" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.906017 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" exitCode=1 Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.906105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.906342 4805 scope.go:117] "RemoveContainer" containerID="226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.907383 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:07:35 crc kubenswrapper[4805]: E1203 00:07:35.907642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.922112 4805 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.938034 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.949674 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.959973 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.963506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.963530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.963538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.963583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.963592 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:35Z","lastTransitionTime":"2025-12-03T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.973849 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595dab30-121e-47a9-9d14-b3e0658b0da2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b9bd6a2ff9fb5775064416b40e7c898532700f85cd1f6fc749af0fb3618454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.987341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:35 crc kubenswrapper[4805]: I1203 00:07:35.998268 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:35Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.008719 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.018837 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.028081 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.038517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.047707 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc 
kubenswrapper[4805]: I1203 00:07:36.064862 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.065908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.065940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.065949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.065963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.065972 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.075261 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.087381 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.102276 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 
00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.114191 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.128015 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.147158 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:35Z\\\",\\\"message\\\":\\\"00:07:34.654076 6802 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655613 6802 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655620 6802 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1203 00:07:34.655637 6802 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1203 00:07:34.655732 6802 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:07:34.654495 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb3653
36f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.168936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.169167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.169351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.169448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.169523 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.272811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.273160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.273270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.273375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.273459 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.375698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.375736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.375744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.375758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.375768 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.422985 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.423056 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.423110 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:36 crc kubenswrapper[4805]: E1203 00:07:36.423130 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:36 crc kubenswrapper[4805]: E1203 00:07:36.423407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:36 crc kubenswrapper[4805]: E1203 00:07:36.423453 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.437223 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc 
kubenswrapper[4805]: I1203 00:07:36.447458 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595dab30-121e-47a9-9d14-b3e0658b0da2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b9bd6a2ff9fb5775064416b40e7c898532700f85cd1f6fc749af0fb3618454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.458340 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.469313 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.478558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.478735 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.478829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.478914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.478990 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.481916 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.492425 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.501552 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.511789 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.534964 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.546724 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.559506 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a24
73a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.572218 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-c
ert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.582521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.582551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.582560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.582577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.582588 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.583409 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.596032 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.611741 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:35Z\\\",\\\"message\\\":\\\"00:07:34.654076 6802 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655613 6802 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655620 6802 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1203 00:07:34.655637 6802 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1203 00:07:34.655732 6802 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:07:34.654495 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb3653
36f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.622149 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.633096 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.644137 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.654370 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:36Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.685169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.685231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.685247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.685263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.685275 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.787515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.787564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.787576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.787594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.787606 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.889477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.889509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.889520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.889532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.889541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.910158 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/3.log" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.991980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.992021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.992031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.992046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:36 crc kubenswrapper[4805]: I1203 00:07:36.992056 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:36Z","lastTransitionTime":"2025-12-03T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.094552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.094627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.094646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.094671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.094690 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.197078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.197120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.197132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.197149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.197161 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.302446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.302516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.302538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.302557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.302574 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.404685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.404805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.404835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.404866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.404886 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.423013 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:37 crc kubenswrapper[4805]: E1203 00:07:37.423175 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.508070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.508137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.508159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.508189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.508244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.610777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.610828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.610841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.610858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.610871 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.714247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.714302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.714319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.714345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.714363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.817521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.817804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.817962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.818100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.818237 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.920597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.920635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.920679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.920694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:37 crc kubenswrapper[4805]: I1203 00:07:37.920705 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:37Z","lastTransitionTime":"2025-12-03T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.022733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.022781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.022793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.022808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.022817 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.126741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.126775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.126790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.126806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.126817 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.229517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.229851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.229946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.230031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.230122 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.268226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.268345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.268417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.268514 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.268579 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:42.268556287 +0000 UTC m=+146.117518903 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.268844 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:42.268832095 +0000 UTC m=+146.117794711 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.268856 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.269083 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:42.269068451 +0000 UTC m=+146.118031067 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.332340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.332387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.332399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.332413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.332424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.369181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.369277 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.369441 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.369454 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.369480 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.369495 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:38 crc 
kubenswrapper[4805]: E1203 00:07:38.369461 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.369558 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:42.369534717 +0000 UTC m=+146.218497333 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.369565 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.369598 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:42.369587439 +0000 UTC m=+146.218550045 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.422316 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.422395 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.422441 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.422746 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.422937 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:38 crc kubenswrapper[4805]: E1203 00:07:38.422994 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.434404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.434559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.434675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.434779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.434875 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.537717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.538097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.538255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.538395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.538521 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.642091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.642449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.642551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.642701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.642806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.745982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.746042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.746054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.746072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.746085 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.848315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.848373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.848386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.848405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.848418 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.950908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.950957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.950973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.950994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:38 crc kubenswrapper[4805]: I1203 00:07:38.951009 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:38Z","lastTransitionTime":"2025-12-03T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.053654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.053689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.053697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.053710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.053719 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.157311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.157382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.157399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.157427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.157448 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.259961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.260000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.260008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.260092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.260104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.362993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.363075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.363093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.363121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.363140 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.422700 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:39 crc kubenswrapper[4805]: E1203 00:07:39.422892 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.466297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.466354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.466370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.466388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.466402 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.569387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.569443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.569453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.569473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.569485 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.644638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.644684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.644692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.644705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.644715 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: E1203 00:07:39.657821 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.662105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.662147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.662158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.662174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.662185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.682279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.682317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.682326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.682340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.682351 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.703296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.703367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.703397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.703423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.703443 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: E1203 00:07:39.722981 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.728280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.728329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.728344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.728365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.728380 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: E1203 00:07:39.748121 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:39 crc kubenswrapper[4805]: E1203 00:07:39.748344 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.750378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.750422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.750436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.750454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.750467 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.852891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.852937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.852951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.852970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.852983 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.956558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.956604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.956724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.956740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:39 crc kubenswrapper[4805]: I1203 00:07:39.956784 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:39Z","lastTransitionTime":"2025-12-03T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.059303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.059333 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.059341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.059354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.059363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.161766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.161808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.161817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.161833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.161855 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.264582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.264620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.264631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.264646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.264659 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.367171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.367226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.367239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.367255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.367267 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.422852 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.422892 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:40 crc kubenswrapper[4805]: E1203 00:07:40.422996 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:40 crc kubenswrapper[4805]: E1203 00:07:40.423253 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.422852 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:40 crc kubenswrapper[4805]: E1203 00:07:40.423431 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.470011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.470072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.470109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.470139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.470161 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.573875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.573957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.573981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.574011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.574034 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.677464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.677507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.677516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.677532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.677541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.780549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.780673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.780694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.780716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.780732 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.883667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.883744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.883756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.883778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.883791 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.986319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.986431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.986464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.986501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:40 crc kubenswrapper[4805]: I1203 00:07:40.986520 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:40Z","lastTransitionTime":"2025-12-03T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.089362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.089425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.089444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.089461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.089474 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.192519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.192578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.192595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.192618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.192641 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.295105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.295165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.295182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.295243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.295261 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.398091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.398173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.398192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.398276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.398297 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.422870 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:41 crc kubenswrapper[4805]: E1203 00:07:41.423036 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.501001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.501087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.501106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.501135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.501166 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.604063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.604119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.604133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.604152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.604167 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.706343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.706389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.706405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.706428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.706443 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.809474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.809570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.809610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.809649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.809669 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.913038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.913099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.913116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.913138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:41 crc kubenswrapper[4805]: I1203 00:07:41.913156 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:41Z","lastTransitionTime":"2025-12-03T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.016369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.016415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.016424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.016439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.016449 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.121171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.121244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.121260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.121282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.121300 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.224305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.224350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.224359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.224375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.224385 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.327495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.327538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.327551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.327567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.327578 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.422648 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:42 crc kubenswrapper[4805]: E1203 00:07:42.422814 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.423073 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:42 crc kubenswrapper[4805]: E1203 00:07:42.423135 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.423417 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:42 crc kubenswrapper[4805]: E1203 00:07:42.423508 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.429413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.429458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.429469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.429487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.429501 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.533039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.533101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.533114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.533132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.533145 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.636304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.636351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.636361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.636375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.636386 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.739597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.739670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.739681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.739698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.739709 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.842812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.842849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.842858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.842871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.842881 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.945565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.945614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.945623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.945640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:42 crc kubenswrapper[4805]: I1203 00:07:42.945650 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:42Z","lastTransitionTime":"2025-12-03T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.049494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.049556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.049576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.049596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.049611 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.152540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.152615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.152635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.152666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.152686 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.256172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.256242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.256252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.256271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.256281 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.359426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.359519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.359540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.359565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.359596 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.422999 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:43 crc kubenswrapper[4805]: E1203 00:07:43.423194 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.462278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.462318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.462329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.462345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.462358 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.565423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.565488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.565509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.565532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.565550 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.667895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.667972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.667990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.668023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.668042 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.771282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.771378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.771406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.771435 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.771453 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.874805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.874887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.874910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.874942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.874968 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.977654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.977750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.977770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.977796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:43 crc kubenswrapper[4805]: I1203 00:07:43.977820 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:43Z","lastTransitionTime":"2025-12-03T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.079909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.079955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.079966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.079979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.079990 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.183362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.183398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.183406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.183419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.183428 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.285780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.285847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.285868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.285896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.285919 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.388383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.388421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.388430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.388444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.388455 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.422971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:44 crc kubenswrapper[4805]: E1203 00:07:44.423154 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.423387 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:44 crc kubenswrapper[4805]: E1203 00:07:44.423447 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.423553 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:44 crc kubenswrapper[4805]: E1203 00:07:44.423613 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.490914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.490966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.490980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.490999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.491011 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.593778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.593826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.593844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.593862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.593875 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.696039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.696088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.696096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.696111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.696121 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.799028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.799074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.799082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.799097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.799107 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.904946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.904988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.905363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.905388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:44 crc kubenswrapper[4805]: I1203 00:07:44.905401 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:44Z","lastTransitionTime":"2025-12-03T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.008235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.008284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.008293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.008308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.008319 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.110539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.110591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.110602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.110619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.110633 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.212945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.212998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.213047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.213065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.213077 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.315873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.315947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.315970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.316002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.316027 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.418005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.418049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.418062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.418078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.418087 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.422540 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:45 crc kubenswrapper[4805]: E1203 00:07:45.422752 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.520936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.520985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.521000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.521020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.521055 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.623806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.623853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.623862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.623875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.623885 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.726598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.726630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.726640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.726652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.726662 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.829423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.829695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.829757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.829825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.829897 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.932438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.932811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.932880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.932955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:45 crc kubenswrapper[4805]: I1203 00:07:45.933026 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:45Z","lastTransitionTime":"2025-12-03T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.036061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.036113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.036126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.036149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.036169 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.139357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.139412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.139426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.139446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.139459 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.242333 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.242386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.242398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.242414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.242426 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.344663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.344709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.344724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.344744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.344759 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.423088 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.423250 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.423372 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:46 crc kubenswrapper[4805]: E1203 00:07:46.423363 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:46 crc kubenswrapper[4805]: E1203 00:07:46.423518 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:46 crc kubenswrapper[4805]: E1203 00:07:46.423667 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.441454 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.447756 4805 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.447818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.447843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.447873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.447896 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.457417 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc 
kubenswrapper[4805]: I1203 00:07:46.472273 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595dab30-121e-47a9-9d14-b3e0658b0da2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b9bd6a2ff9fb5775064416b40e7c898532700f85cd1f6fc749af0fb3618454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.491575 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.511005 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.533416 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.550112 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.550933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.551002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.551027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.551052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.551074 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.564705 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.586916 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739
e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.598729 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.611813 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.624470 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 
00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.637144 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.649010 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.653325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.653382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.653395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.653414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.653433 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.668646 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://226fdc46f3aef4175077fbef4473ebb3c1bc01eced8fbea79b443e7743aa21f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:07Z\\\",\\\"message\\\":\\\"/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 00:07:07.360788 6476 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370348 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 00:07:07.370442 6476 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI1203 00:07:07.370475 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 00:07:07.370483 6476 factory.go:656] Stopping watch factory\\\\nI1203 00:07:07.370504 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 00:07:07.370574 6476 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 00:07:07.370732 6476 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1203 00:07:07.373287 6476 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1203 00:07:07.373320 6476 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1203 00:07:07.373397 6476 ovnkube.go:599] Stopped ovnkube\\\\nI1203 00:07:07.373443 6476 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 00:07:07.373536 6476 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:35Z\\\",\\\"message\\\":\\\"00:07:34.654076 6802 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655613 6802 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655620 6802 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1203 00:07:34.655637 6802 base_network_controller_pods.go:477] 
[default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1203 00:07:34.655732 6802 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:07:34.654495 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"
mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.681301 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.693211 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.704403 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.714583 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:46Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.756656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.756697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.756707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.756724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.756738 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.859043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.859101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.859114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.859133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.859145 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.961157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.961187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.961214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.961227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:46 crc kubenswrapper[4805]: I1203 00:07:46.961237 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:46Z","lastTransitionTime":"2025-12-03T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.062806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.062841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.062850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.062864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.062874 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.164816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.164856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.164872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.164893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.164906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.267863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.267909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.267920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.267936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.267946 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.370134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.370182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.370214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.370231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.370245 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.422741 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:47 crc kubenswrapper[4805]: E1203 00:07:47.422912 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.473603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.473647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.473659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.473674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.473686 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.576010 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.576046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.576056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.576072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.576082 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.678767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.678805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.678815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.678828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.678839 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.781675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.781712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.781724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.781743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.781756 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.884545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.884587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.884601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.884619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.884632 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.987399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.987445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.987457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.987474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:47 crc kubenswrapper[4805]: I1203 00:07:47.987486 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:47Z","lastTransitionTime":"2025-12-03T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.089684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.089724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.089735 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.089752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.089764 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.192373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.192429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.192446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.192469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.192485 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.294987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.295073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.295113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.295132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.295143 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.398025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.398070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.398087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.398113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.398123 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.422317 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.422376 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.422313 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:48 crc kubenswrapper[4805]: E1203 00:07:48.422458 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:48 crc kubenswrapper[4805]: E1203 00:07:48.422523 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:48 crc kubenswrapper[4805]: E1203 00:07:48.422590 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.501076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.501137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.501147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.501161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.501223 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.604395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.604451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.604462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.604480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.604492 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.706945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.707068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.707098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.707135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.707156 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.810722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.810787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.810806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.810832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.810855 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.915667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.915736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.915755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.915785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:48 crc kubenswrapper[4805]: I1203 00:07:48.915806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:48Z","lastTransitionTime":"2025-12-03T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.019185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.019268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.019282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.019299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.019312 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.121699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.121736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.121744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.121758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.121768 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.224725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.224793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.224804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.224819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.224829 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.327838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.327882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.327901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.327918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.327932 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.423178 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.423372 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.424383 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.424693 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.431107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.431161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.431171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.431186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.431211 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.438527 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.456891 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.470908 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.481749 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.491616 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc 
kubenswrapper[4805]: I1203 00:07:49.501210 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595dab30-121e-47a9-9d14-b3e0658b0da2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b9bd6a2ff9fb5775064416b40e7c898532700f85cd1f6fc749af0fb3618454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.513190 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.523323 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.534343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.534396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.534408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.534425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.534438 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.537831 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.552846 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.564296 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.577028 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.598307 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.608672 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.624254 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a24
73a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.637368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.637497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.637562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.637585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.637601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.639414 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to 
sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.653265 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.668559 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.696638 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:35Z\\\",\\\"message\\\":\\\"00:07:34.654076 6802 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655613 6802 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655620 6802 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1203 00:07:34.655637 6802 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1203 00:07:34.655732 6802 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:07:34.654495 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.740814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.740851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.740862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.740877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.740888 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.754143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.754175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.754182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.754194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.754224 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.768238 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.771611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.771633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.771641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.771653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.771662 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.784806 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.790269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.790327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.790343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.790358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.790377 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.805018 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.808660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.808714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.808733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.808758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.808775 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.822118 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.826531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.826570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.826583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.826599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.826610 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.840710 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"59e4b4c6-95e9-49e1-956a-d67d8a6ba8db\\\",\\\"systemUUID\\\":\\\"c7baa8da-c025-4f62-9ac6-1cb1b9cf4097\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:49 crc kubenswrapper[4805]: E1203 00:07:49.840861 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.843718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.843757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.843772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.843791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.843804 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.946868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.946926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.946946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.946971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:49 crc kubenswrapper[4805]: I1203 00:07:49.946992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:49Z","lastTransitionTime":"2025-12-03T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.050445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.050504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.050517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.050537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.050551 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.154524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.154602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.154621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.154655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.154676 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.258121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.258183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.258193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.258226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.258239 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.362043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.362111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.362129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.362155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.362174 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.423433 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.423589 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.423732 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:50 crc kubenswrapper[4805]: E1203 00:07:50.423909 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:50 crc kubenswrapper[4805]: E1203 00:07:50.424390 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:50 crc kubenswrapper[4805]: E1203 00:07:50.424510 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.465857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.465897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.465910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.465928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.465940 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.570808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.570909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.570931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.570959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.570979 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.675132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.675222 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.675238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.675264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.675278 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.777975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.778050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.778074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.778104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.778128 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.882032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.882123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.882149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.882180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.882227 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.986409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.986494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.986515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.986547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:50 crc kubenswrapper[4805]: I1203 00:07:50.986571 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:50Z","lastTransitionTime":"2025-12-03T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.089907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.089973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.089990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.090016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.090030 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.193349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.193406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.193415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.193437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.193450 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.296892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.296973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.296994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.297027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.297046 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.401135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.401285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.401315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.401386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.401404 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.422544 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:51 crc kubenswrapper[4805]: E1203 00:07:51.422781 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.504627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.504691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.504709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.504738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.504760 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.608119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.608181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.608194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.608256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.608269 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.711736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.711818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.711842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.711899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.711930 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.815068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.815129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.815139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.815157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.815169 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.917562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.917627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.917636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.917650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:51 crc kubenswrapper[4805]: I1203 00:07:51.917660 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:51Z","lastTransitionTime":"2025-12-03T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.020499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.020557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.020570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.020589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.020601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.124246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.124316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.124327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.124347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.124362 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.227516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.227600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.227624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.227657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.227682 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.331067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.331243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.331277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.331497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.331518 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.422445 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.422517 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:52 crc kubenswrapper[4805]: E1203 00:07:52.422618 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:52 crc kubenswrapper[4805]: E1203 00:07:52.422761 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.422821 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:52 crc kubenswrapper[4805]: E1203 00:07:52.422881 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.434101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.434162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.434178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.434208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.434225 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.538093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.538194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.538298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.538334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.538362 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.641783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.641886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.641908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.641934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.641954 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.746075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.746131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.746140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.746160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.746173 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.849298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.849376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.849438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.849482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.849503 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.953543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.953610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.953627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.953653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:52 crc kubenswrapper[4805]: I1203 00:07:52.953673 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:52Z","lastTransitionTime":"2025-12-03T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.057469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.057534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.057551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.057576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.057595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.161056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.161114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.161190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.161258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.161277 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.264723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.264781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.264799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.264826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.264848 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.368566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.368654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.368678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.368713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.368736 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.422955 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:53 crc kubenswrapper[4805]: E1203 00:07:53.423222 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.472338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.472420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.472454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.472489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.472531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.576694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.576759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.576776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.576803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.576821 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.680514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.680603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.680623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.680655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.680677 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.742036 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:53 crc kubenswrapper[4805]: E1203 00:07:53.742328 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:53 crc kubenswrapper[4805]: E1203 00:07:53.742455 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs podName:3829c74e-7807-4b31-9b2a-2482ec95a235 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:57.742421563 +0000 UTC m=+161.591384199 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs") pod "network-metrics-daemon-q4nqx" (UID: "3829c74e-7807-4b31-9b2a-2482ec95a235") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.783770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.783859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.783880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.783909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.783928 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.886961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.887038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.887057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.887085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.887104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.990572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.990648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.990666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.990692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:53 crc kubenswrapper[4805]: I1203 00:07:53.990712 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:53Z","lastTransitionTime":"2025-12-03T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.093807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.093882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.093901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.093932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.093957 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.209021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.209116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.209137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.209168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.209189 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.313116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.313175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.313188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.313237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.313253 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.416135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.416259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.416288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.416321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.416346 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.422630 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.422630 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.422804 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:54 crc kubenswrapper[4805]: E1203 00:07:54.423025 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:54 crc kubenswrapper[4805]: E1203 00:07:54.423289 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:54 crc kubenswrapper[4805]: E1203 00:07:54.423692 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.519616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.519711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.519739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.519831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.519906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.624329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.625296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.625520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.625657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.625778 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.729395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.729467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.729481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.729526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.729539 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.832998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.833622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.833779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.834022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.834236 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.937845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.937922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.937950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.937981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:54 crc kubenswrapper[4805]: I1203 00:07:54.938004 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:54Z","lastTransitionTime":"2025-12-03T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.040680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.040747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.040766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.040791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.040811 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.144684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.144774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.144798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.144833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.144856 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.248434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.248494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.248506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.248528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.248544 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.352121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.352195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.352254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.352285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.352306 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.422369 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:55 crc kubenswrapper[4805]: E1203 00:07:55.422967 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.456386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.456480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.456515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.456550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.456573 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.559500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.559569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.559590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.559624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.559645 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.663734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.663791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.663809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.663834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.663851 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.767142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.767229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.767246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.767268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.767282 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.871597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.871643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.871653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.871670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.871686 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.974708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.974818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.974843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.974888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:55 crc kubenswrapper[4805]: I1203 00:07:55.974917 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:55Z","lastTransitionTime":"2025-12-03T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.078986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.079070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.079090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.079547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.079829 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.183599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.183662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.183682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.183711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.183732 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.286890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.286978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.287000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.287031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.287055 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.390039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.390086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.390120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.390137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.390147 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.422876 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:56 crc kubenswrapper[4805]: E1203 00:07:56.423047 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.423371 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.423436 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:56 crc kubenswrapper[4805]: E1203 00:07:56.423449 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:56 crc kubenswrapper[4805]: E1203 00:07:56.424261 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.449051 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9403035d-0ad7-413a-a630-a252fcafb16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"amed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764720388\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764720388\\\\\\\\\\\\\\\" (2025-12-02 23:06:28 +0000 UTC to 2026-12-02 23:06:28 +0000 UTC (now=2025-12-03 00:06:34.336792272 +0000 UTC))\\\\\\\"\\\\nI1203 00:06:34.336848 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1203 00:06:34.336880 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1203 00:06:34.336984 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337016 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1203 00:06:34.337077 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-613176698/tls.crt::/tmp/serving-cert-613176698/tls.key\\\\\\\"\\\\nI1203 00:06:34.337230 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1203 00:06:34.340128 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1203 00:06:34.340154 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1203 00:06:34.340471 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340490 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1203 00:06:34.340510 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1203 00:06:34.340516 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1203 00:06:34.346003 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.464829 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.482543 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lllfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"839326a5-41df-492f-83c4-3ee9e2964dc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:23Z\\\",\\\"message\\\":\\\"2025-12-03T00:06:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c\\\\n2025-12-03T00:06:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b185583-eef9-44d6-bd2e-697ba2a0e55c to /host/opt/cni/bin/\\\\n2025-12-03T00:06:38Z [verbose] multus-daemon started\\\\n2025-12-03T00:06:38Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T00:07:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvffg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lllfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.493864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.494019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.494053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.494082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.494100 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.510364 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2dbad567-2c97-49dd-ac90-41fd66a3b606\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T00:07:35Z\\\",\\\"message\\\":\\\"00:07:34.654076 6802 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655613 6802 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1203 00:07:34.655620 6802 ovn.go:134] Ensuring zone local for 
Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1203 00:07:34.655637 6802 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI1203 00:07:34.655732 6802 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 00:07:34.654495 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73de32a8097fe15c64
98018189c924b7ebdae1a7883580da230fde1cb365336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6mm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k6pk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.532824 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69efbdeddaf41143ad384d9f4bcc549d85ff2c9e04ca5d6773679901640ede1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.550240 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31200492e05c43b7820c0a7534b72d893b986b1611fa96e003b19e71bbeb1f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://422033d1e544c919f78504d744bb3d07ff5e6ef26ac467a2ad17f78bdfc115e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.566461 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.579367 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-29wnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60be52b4-e3f7-4b20-b854-5521ee573c09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34433443ba35d5e5274b04679c1dceaea50a315170ff526d55196928ae443da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g78bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-29wnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.589998 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"595dab30-121e-47a9-9d14-b3e0658b0da2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52b9bd6a2ff9fb5775064416b40e7c898532700f85cd1f6fc749af0fb3618454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a605790bb8690284503c4ff0054b3bbd6f81ad0822fa614d1bb0f7e476c2211\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.598775 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.598846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.598866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.598897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.598918 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.604955 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501e095c-82af-40ed-b50e-88fd4f43f048\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3099f249ff12ad2a1f9db8e46832fb73045ba48f55fba34260774a4ea301d31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8b0c23ec8
e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://944eb6f29da04a2b3d42f9da204ebc535167d9a0ddd223136f4c2e6c210bec6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://875e17a4c7ad76eede7fe4050b9ef90de4168c4b628410f59c548ae8f3d3419f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.621651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6265ed86-69ce-4136-85a8-9b7e3801d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10721a21c16e0eb75e863b08e07a09c628f5e1f6ae17d60c3c65010188d04ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c67b169b1480f166cc6166b80f1371fda8965fcd58b25153a1d3befd74af36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e723695c2c26ea7f587aada1b84da8caf1fba5ebbaf8b5665de0c33f8c0d02ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d03bb7eb1194dc74bbe5082832d3c18c3d3a7719c447300fcf79d76ebfc14d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.637525 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.654871 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0524f294dce41467ae5081b8ad64b9107c476c916f0665b8ca1f4ee05ecc4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.673451 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qh9zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c3c462d-b6c9-48b0-b2b7-a03ae311d7c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://552e57866f7845b51d56c272ecbe60c569ef9b79a5df387ea2b4fa2f5c0e02d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zxz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qh9zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.690613 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a3f0b73-6cf4-4477-a7fd-0627f96339ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1aeebcab06f1b267bc2e583a2fd8f15328de9c1767ba1d26c75641699f41e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71768f7cf0016d2b7a206f0536cdecd4d112
e64661010318877f2601b9ebbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlknv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x476n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.702391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.702442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.702454 4805 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.702473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.702484 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.705834 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829c74e-7807-4b31-9b2a-2482ec95a235\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjzpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q4nqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 
03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.736802 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc4b06b-d861-46cb-82b9-ac688b8257c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb60031f96573fe2fd8d267fb9b7c69069c1a5152cdf5ce310cf37e5e9915c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\
\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbdc40ebb82a9380771c4ebc8593d2cd834b1ae54ca9bf2846f494a6f762f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009c372e01d804bea50972dc34bae1b828bc74e2680963e8ffdc7a9940793085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0a576bd6902afc8f56e6618026c941e893324be54cf4ae50f68ebb12f0c1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6df4965661e7359e63d87e10210d53c45a93569ac97c5f6d1356611b0359c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9dfaab1ffe0535ac904d602370f58939b6390b5ee061ab1646475ea5382db5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67de85f92fa2aafa9a8621d86f89db7d1941004db6e44b5efc18794e3e82e502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a739e2b7fc62917feb10899021ff3ed11051b396d28ee4ca5982abe01b1eb7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:18Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.752761 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42d6da4d-d781-4243-b5c3-28a8cf91ef53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359647d079c0a7f4ef57ef9c1a2a8ee93ad75a301f0cdb6d74d6007a830308d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebc83263db68a2febcff715997832c2c1fed01d
4ca51f09f5a9f112c209ea37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlfx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dd5rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.774512 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pggh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64f3e4fa-808d-4e25-a03e-be11b8a1bcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892ecd26fa4a60929fdf5b24ff9f6517c0df3b88ff8c8cfac39c544ce97e65fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3792edc2b362db40d715f132b0e78585bce71e5ec3a98e64c30a0d2054cf4b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://066ab320329cb4e56a605a04a94376f8d47d79e27c23729d9b4c3475d1a7b73d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a3cdc3b9b715451d12abd31b0632a5a66919a3d2ccb97cb3884a8822733132\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b7c
c937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6b7cc937afb1167d9415f00a8d4b1e82e2af48995acf55262ad6e70384d9892\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9559613bcce12fd9f066791f0b137de11b2e6f00580b0434ad8db82e7c4352a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cd892c7552ef1497c38fd2824b7ad4120dcb0498b6b004ee4e7a564887739bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T00:06:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95vt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T00:06:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pggh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T00:07:56Z is after 2025-08-24T17:21:41Z" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.804951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.805004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.805024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.805042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.805057 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.907253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.907303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.907317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.907336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:56 crc kubenswrapper[4805]: I1203 00:07:56.907350 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:56Z","lastTransitionTime":"2025-12-03T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.009925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.009986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.010007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.010038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.010058 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.113391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.113456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.113473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.113498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.113514 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.218109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.218227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.218257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.218288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.218309 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.321421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.321494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.321522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.321559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.321585 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.423127 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:57 crc kubenswrapper[4805]: E1203 00:07:57.423470 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.424884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.424947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.424959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.424977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.424989 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.529310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.529390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.529409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.529438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.529458 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.632273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.632341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.632353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.632370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.632382 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.736433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.736506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.736518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.736540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.736553 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.839696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.839745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.839759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.839780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.839794 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.943731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.943789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.943801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.943821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:57 crc kubenswrapper[4805]: I1203 00:07:57.943877 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:57Z","lastTransitionTime":"2025-12-03T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.046379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.046425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.046438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.046455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.046466 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.148741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.148782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.148792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.148806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.148816 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.252975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.253073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.253109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.253145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.253173 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.356625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.356705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.356723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.356751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.356773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.423094 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.423177 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:07:58 crc kubenswrapper[4805]: E1203 00:07:58.423389 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.423136 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:07:58 crc kubenswrapper[4805]: E1203 00:07:58.423536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:07:58 crc kubenswrapper[4805]: E1203 00:07:58.423728 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.460575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.460687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.460706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.460736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.460759 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.563378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.563443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.563462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.563485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.563501 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.667013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.667091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.667115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.667148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.667170 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.770813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.771007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.771046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.771086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.771105 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.874874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.875000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.875021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.875057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.875080 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.977377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.977421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.977430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.977446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:58 crc kubenswrapper[4805]: I1203 00:07:58.977456 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:58Z","lastTransitionTime":"2025-12-03T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.080813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.081266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.081407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.081523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.081615 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.184735 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.184855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.184874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.184903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.184921 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.288120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.288239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.288266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.288297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.288328 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.391059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.391102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.391113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.391126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.391136 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.422356 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:07:59 crc kubenswrapper[4805]: E1203 00:07:59.422628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.494132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.494269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.494293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.494325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.494349 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.596951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.597052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.597071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.597104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.597126 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.699495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.699527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.699534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.699547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.699558 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.801943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.801998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.802012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.802034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.802047 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.905495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.905548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.905560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.905577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:07:59 crc kubenswrapper[4805]: I1203 00:07:59.905589 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:07:59Z","lastTransitionTime":"2025-12-03T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.008698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.008766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.008780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.008801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.008819 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:08:00Z","lastTransitionTime":"2025-12-03T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.110724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.111107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.111347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.111370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.111382 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:08:00Z","lastTransitionTime":"2025-12-03T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.135675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.135747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.135760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.135778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.135824 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T00:08:00Z","lastTransitionTime":"2025-12-03T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.165439 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7"] Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.165904 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.168510 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.169227 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.169254 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.169593 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.217939 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-29wnh" podStartSLOduration=85.217875749 podStartE2EDuration="1m25.217875749s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.216954996 +0000 UTC m=+104.065917602" watchObservedRunningTime="2025-12-03 00:08:00.217875749 +0000 UTC m=+104.066838375" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.224707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bec7f5de-0153-48a4-8e50-12984566dafb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.224781 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec7f5de-0153-48a4-8e50-12984566dafb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.224810 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bec7f5de-0153-48a4-8e50-12984566dafb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.224836 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bec7f5de-0153-48a4-8e50-12984566dafb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.224858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bec7f5de-0153-48a4-8e50-12984566dafb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.266275 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.26624977 podStartE2EDuration="1m26.26624977s" 
podCreationTimestamp="2025-12-03 00:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.252674998 +0000 UTC m=+104.101637614" watchObservedRunningTime="2025-12-03 00:08:00.26624977 +0000 UTC m=+104.115212376" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.280163 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.280157871 podStartE2EDuration="58.280157871s" podCreationTimestamp="2025-12-03 00:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.266573119 +0000 UTC m=+104.115535735" watchObservedRunningTime="2025-12-03 00:08:00.280157871 +0000 UTC m=+104.129120477" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.325797 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec7f5de-0153-48a4-8e50-12984566dafb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.325852 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bec7f5de-0153-48a4-8e50-12984566dafb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.325876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/bec7f5de-0153-48a4-8e50-12984566dafb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.325901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bec7f5de-0153-48a4-8e50-12984566dafb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.325961 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bec7f5de-0153-48a4-8e50-12984566dafb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.326057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bec7f5de-0153-48a4-8e50-12984566dafb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.326164 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bec7f5de-0153-48a4-8e50-12984566dafb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 
03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.326880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bec7f5de-0153-48a4-8e50-12984566dafb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.330602 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qh9zq" podStartSLOduration=85.330581644 podStartE2EDuration="1m25.330581644s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.313872322 +0000 UTC m=+104.162834918" watchObservedRunningTime="2025-12-03 00:08:00.330581644 +0000 UTC m=+104.179544250" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.330795 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x476n" podStartSLOduration=84.330788409 podStartE2EDuration="1m24.330788409s" podCreationTimestamp="2025-12-03 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.330527593 +0000 UTC m=+104.179490209" watchObservedRunningTime="2025-12-03 00:08:00.330788409 +0000 UTC m=+104.179751005" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.344835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bec7f5de-0153-48a4-8e50-12984566dafb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.344980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bec7f5de-0153-48a4-8e50-12984566dafb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wnkm7\" (UID: \"bec7f5de-0153-48a4-8e50-12984566dafb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.372999 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.372972916 podStartE2EDuration="32.372972916s" podCreationTimestamp="2025-12-03 00:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.360314446 +0000 UTC m=+104.209277062" watchObservedRunningTime="2025-12-03 00:08:00.372972916 +0000 UTC m=+104.221935522" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.373272 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podStartSLOduration=85.373266243 podStartE2EDuration="1m25.373266243s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.372720878 +0000 UTC m=+104.221683484" watchObservedRunningTime="2025-12-03 00:08:00.373266243 +0000 UTC m=+104.222228859" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.388671 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6pggh" podStartSLOduration=85.388652751 podStartE2EDuration="1m25.388652751s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.38860903 +0000 UTC m=+104.237571646" watchObservedRunningTime="2025-12-03 00:08:00.388652751 +0000 UTC m=+104.237615357" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.411883 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=82.411868197 podStartE2EDuration="1m22.411868197s" podCreationTimestamp="2025-12-03 00:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.411710993 +0000 UTC m=+104.260673619" watchObservedRunningTime="2025-12-03 00:08:00.411868197 +0000 UTC m=+104.260830803" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.423066 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:00 crc kubenswrapper[4805]: E1203 00:08:00.423245 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.423090 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.423066 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:00 crc kubenswrapper[4805]: E1203 00:08:00.423577 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:00 crc kubenswrapper[4805]: E1203 00:08:00.423740 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.423905 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:08:00 crc kubenswrapper[4805]: E1203 00:08:00.424075 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.464104 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lllfh" podStartSLOduration=85.46407936599999 podStartE2EDuration="1m25.464079366s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.43812776 +0000 UTC m=+104.287090386" watchObservedRunningTime="2025-12-03 00:08:00.464079366 +0000 UTC m=+104.313041982" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.481400 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.497828 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.497576452 podStartE2EDuration="1m26.497576452s" podCreationTimestamp="2025-12-03 00:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:00.496667018 +0000 UTC m=+104.345629654" watchObservedRunningTime="2025-12-03 00:08:00.497576452 +0000 UTC m=+104.346539078" Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.994060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" event={"ID":"bec7f5de-0153-48a4-8e50-12984566dafb","Type":"ContainerStarted","Data":"591f1c3e052bde04d24ec3d67fe284a8bbc995e8e5733e8ef8a5efd51dfcf51c"} Dec 03 00:08:00 crc kubenswrapper[4805]: I1203 00:08:00.994168 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" event={"ID":"bec7f5de-0153-48a4-8e50-12984566dafb","Type":"ContainerStarted","Data":"4f4dc75aadc86f25a5c8662f9fd1991297308a4b0eba87749f5d6f9f7763395b"} Dec 03 00:08:01 crc kubenswrapper[4805]: I1203 00:08:01.010366 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wnkm7" podStartSLOduration=86.010332589 
podStartE2EDuration="1m26.010332589s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:01.007749263 +0000 UTC m=+104.856711879" watchObservedRunningTime="2025-12-03 00:08:01.010332589 +0000 UTC m=+104.859295265" Dec 03 00:08:01 crc kubenswrapper[4805]: I1203 00:08:01.422836 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:01 crc kubenswrapper[4805]: E1203 00:08:01.423120 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:02 crc kubenswrapper[4805]: I1203 00:08:02.422405 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:02 crc kubenswrapper[4805]: I1203 00:08:02.422474 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:02 crc kubenswrapper[4805]: I1203 00:08:02.422530 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:02 crc kubenswrapper[4805]: E1203 00:08:02.422549 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:02 crc kubenswrapper[4805]: E1203 00:08:02.422628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:02 crc kubenswrapper[4805]: E1203 00:08:02.422714 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:03 crc kubenswrapper[4805]: I1203 00:08:03.422827 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:03 crc kubenswrapper[4805]: E1203 00:08:03.422988 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:04 crc kubenswrapper[4805]: I1203 00:08:04.423372 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:04 crc kubenswrapper[4805]: I1203 00:08:04.423427 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:04 crc kubenswrapper[4805]: I1203 00:08:04.423444 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:04 crc kubenswrapper[4805]: E1203 00:08:04.423548 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:04 crc kubenswrapper[4805]: E1203 00:08:04.423638 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:04 crc kubenswrapper[4805]: E1203 00:08:04.423712 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:05 crc kubenswrapper[4805]: I1203 00:08:05.422557 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:05 crc kubenswrapper[4805]: E1203 00:08:05.422781 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:06 crc kubenswrapper[4805]: I1203 00:08:06.422750 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:06 crc kubenswrapper[4805]: I1203 00:08:06.422804 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:06 crc kubenswrapper[4805]: E1203 00:08:06.424747 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:06 crc kubenswrapper[4805]: I1203 00:08:06.424765 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:06 crc kubenswrapper[4805]: E1203 00:08:06.424841 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:06 crc kubenswrapper[4805]: E1203 00:08:06.424918 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:07 crc kubenswrapper[4805]: I1203 00:08:07.422864 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:07 crc kubenswrapper[4805]: E1203 00:08:07.423036 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:08 crc kubenswrapper[4805]: I1203 00:08:08.422841 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:08 crc kubenswrapper[4805]: I1203 00:08:08.422884 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:08 crc kubenswrapper[4805]: E1203 00:08:08.423042 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:08 crc kubenswrapper[4805]: I1203 00:08:08.423091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:08 crc kubenswrapper[4805]: E1203 00:08:08.423238 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:08 crc kubenswrapper[4805]: E1203 00:08:08.423451 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:09 crc kubenswrapper[4805]: I1203 00:08:09.422375 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:09 crc kubenswrapper[4805]: E1203 00:08:09.422785 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.029915 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/1.log" Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.030938 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/0.log" Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.031003 4805 generic.go:334] "Generic (PLEG): container finished" podID="839326a5-41df-492f-83c4-3ee9e2964dc8" containerID="00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe" exitCode=1 Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.031046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerDied","Data":"00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe"} Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.031090 4805 scope.go:117] "RemoveContainer" containerID="861ed50bfa28fdc38e1d233f86ef76c23687d10f7418b3d95ad00c67b8515ad1" Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.031600 
4805 scope.go:117] "RemoveContainer" containerID="00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe" Dec 03 00:08:10 crc kubenswrapper[4805]: E1203 00:08:10.031867 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lllfh_openshift-multus(839326a5-41df-492f-83c4-3ee9e2964dc8)\"" pod="openshift-multus/multus-lllfh" podUID="839326a5-41df-492f-83c4-3ee9e2964dc8" Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.422840 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:10 crc kubenswrapper[4805]: E1203 00:08:10.423029 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.423395 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:10 crc kubenswrapper[4805]: E1203 00:08:10.423478 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:10 crc kubenswrapper[4805]: I1203 00:08:10.423593 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:10 crc kubenswrapper[4805]: E1203 00:08:10.423780 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:11 crc kubenswrapper[4805]: I1203 00:08:11.037165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/1.log" Dec 03 00:08:11 crc kubenswrapper[4805]: I1203 00:08:11.422946 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:11 crc kubenswrapper[4805]: E1203 00:08:11.423304 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:12 crc kubenswrapper[4805]: I1203 00:08:12.423316 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:12 crc kubenswrapper[4805]: I1203 00:08:12.423752 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:12 crc kubenswrapper[4805]: I1203 00:08:12.424258 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:08:12 crc kubenswrapper[4805]: E1203 00:08:12.424759 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-k6pk5_openshift-ovn-kubernetes(2dbad567-2c97-49dd-ac90-41fd66a3b606)\"" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" Dec 03 00:08:12 crc kubenswrapper[4805]: E1203 00:08:12.424955 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:12 crc kubenswrapper[4805]: E1203 00:08:12.425087 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:12 crc kubenswrapper[4805]: I1203 00:08:12.425364 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:12 crc kubenswrapper[4805]: E1203 00:08:12.425509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:13 crc kubenswrapper[4805]: I1203 00:08:13.423227 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:13 crc kubenswrapper[4805]: E1203 00:08:13.423410 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:14 crc kubenswrapper[4805]: I1203 00:08:14.422924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:14 crc kubenswrapper[4805]: I1203 00:08:14.422994 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:14 crc kubenswrapper[4805]: I1203 00:08:14.422953 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:14 crc kubenswrapper[4805]: E1203 00:08:14.423099 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:14 crc kubenswrapper[4805]: E1203 00:08:14.423179 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:14 crc kubenswrapper[4805]: E1203 00:08:14.423294 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:15 crc kubenswrapper[4805]: I1203 00:08:15.423036 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:15 crc kubenswrapper[4805]: E1203 00:08:15.423188 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:16 crc kubenswrapper[4805]: I1203 00:08:16.423487 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:16 crc kubenswrapper[4805]: I1203 00:08:16.423568 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:16 crc kubenswrapper[4805]: E1203 00:08:16.424737 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:16 crc kubenswrapper[4805]: I1203 00:08:16.424781 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:16 crc kubenswrapper[4805]: E1203 00:08:16.424882 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:16 crc kubenswrapper[4805]: E1203 00:08:16.424987 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:16 crc kubenswrapper[4805]: E1203 00:08:16.425975 4805 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 03 00:08:16 crc kubenswrapper[4805]: E1203 00:08:16.521098 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:08:17 crc kubenswrapper[4805]: I1203 00:08:17.423219 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:17 crc kubenswrapper[4805]: E1203 00:08:17.423368 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:18 crc kubenswrapper[4805]: I1203 00:08:18.423158 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:18 crc kubenswrapper[4805]: I1203 00:08:18.423251 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:18 crc kubenswrapper[4805]: I1203 00:08:18.423175 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:18 crc kubenswrapper[4805]: E1203 00:08:18.423413 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:18 crc kubenswrapper[4805]: E1203 00:08:18.423570 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:18 crc kubenswrapper[4805]: E1203 00:08:18.423662 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:19 crc kubenswrapper[4805]: I1203 00:08:19.423310 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:19 crc kubenswrapper[4805]: E1203 00:08:19.423479 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:20 crc kubenswrapper[4805]: I1203 00:08:20.423061 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:20 crc kubenswrapper[4805]: E1203 00:08:20.423238 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:20 crc kubenswrapper[4805]: I1203 00:08:20.423485 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:20 crc kubenswrapper[4805]: E1203 00:08:20.423547 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:20 crc kubenswrapper[4805]: I1203 00:08:20.423692 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:20 crc kubenswrapper[4805]: E1203 00:08:20.423755 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:20 crc kubenswrapper[4805]: I1203 00:08:20.424226 4805 scope.go:117] "RemoveContainer" containerID="00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe" Dec 03 00:08:21 crc kubenswrapper[4805]: I1203 00:08:21.071858 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/1.log" Dec 03 00:08:21 crc kubenswrapper[4805]: I1203 00:08:21.072225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerStarted","Data":"33dd044a1c452ea0329e56530bf4040c12943c54a9f1455b5eda6d0509b05c15"} Dec 03 00:08:21 crc kubenswrapper[4805]: I1203 00:08:21.422473 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:21 crc kubenswrapper[4805]: E1203 00:08:21.422926 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:21 crc kubenswrapper[4805]: E1203 00:08:21.523018 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:08:22 crc kubenswrapper[4805]: I1203 00:08:22.422645 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:22 crc kubenswrapper[4805]: I1203 00:08:22.422655 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:22 crc kubenswrapper[4805]: E1203 00:08:22.422867 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:22 crc kubenswrapper[4805]: I1203 00:08:22.422680 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:22 crc kubenswrapper[4805]: E1203 00:08:22.422959 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:22 crc kubenswrapper[4805]: E1203 00:08:22.423042 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:23 crc kubenswrapper[4805]: I1203 00:08:23.424834 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:23 crc kubenswrapper[4805]: E1203 00:08:23.425180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:23 crc kubenswrapper[4805]: I1203 00:08:23.426442 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.084007 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/3.log" Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.087593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerStarted","Data":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.088424 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.114127 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podStartSLOduration=109.114099581 podStartE2EDuration="1m49.114099581s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:24.113828844 +0000 UTC m=+127.962791490" watchObservedRunningTime="2025-12-03 00:08:24.114099581 +0000 UTC m=+127.963062197" Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.211968 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q4nqx"] Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.212189 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:24 crc kubenswrapper[4805]: E1203 00:08:24.212464 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.422482 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.422597 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:24 crc kubenswrapper[4805]: E1203 00:08:24.422628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:24 crc kubenswrapper[4805]: I1203 00:08:24.422654 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:24 crc kubenswrapper[4805]: E1203 00:08:24.422759 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:24 crc kubenswrapper[4805]: E1203 00:08:24.422834 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:26 crc kubenswrapper[4805]: I1203 00:08:26.422769 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:26 crc kubenswrapper[4805]: I1203 00:08:26.422839 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:26 crc kubenswrapper[4805]: I1203 00:08:26.422886 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:26 crc kubenswrapper[4805]: I1203 00:08:26.422965 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:26 crc kubenswrapper[4805]: E1203 00:08:26.425935 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:26 crc kubenswrapper[4805]: E1203 00:08:26.426137 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:26 crc kubenswrapper[4805]: E1203 00:08:26.426267 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:26 crc kubenswrapper[4805]: E1203 00:08:26.426385 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:26 crc kubenswrapper[4805]: E1203 00:08:26.524057 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 00:08:28 crc kubenswrapper[4805]: I1203 00:08:28.422813 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:28 crc kubenswrapper[4805]: I1203 00:08:28.422842 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:28 crc kubenswrapper[4805]: E1203 00:08:28.423661 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:28 crc kubenswrapper[4805]: I1203 00:08:28.422843 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:28 crc kubenswrapper[4805]: I1203 00:08:28.422849 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:28 crc kubenswrapper[4805]: E1203 00:08:28.423765 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:28 crc kubenswrapper[4805]: E1203 00:08:28.423795 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:28 crc kubenswrapper[4805]: E1203 00:08:28.423924 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:30 crc kubenswrapper[4805]: I1203 00:08:30.422318 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:30 crc kubenswrapper[4805]: I1203 00:08:30.422375 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:30 crc kubenswrapper[4805]: I1203 00:08:30.422436 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:30 crc kubenswrapper[4805]: E1203 00:08:30.422596 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 00:08:30 crc kubenswrapper[4805]: I1203 00:08:30.422637 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:30 crc kubenswrapper[4805]: E1203 00:08:30.422699 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 00:08:30 crc kubenswrapper[4805]: E1203 00:08:30.422768 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q4nqx" podUID="3829c74e-7807-4b31-9b2a-2482ec95a235" Dec 03 00:08:30 crc kubenswrapper[4805]: E1203 00:08:30.423003 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.422454 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.422453 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.422470 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.422470 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.425393 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.425590 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.425747 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.425883 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.426030 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 00:08:32 crc kubenswrapper[4805]: I1203 00:08:32.427746 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.633105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.666422 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d6jmb"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.667078 
4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.669025 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.669481 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.669573 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8744"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.669904 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.670621 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwpvv"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.670983 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.671577 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.671909 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.676823 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.677189 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.677940 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.678311 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.678533 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.678737 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.678990 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.680160 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.680800 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.681166 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.681499 
4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.681711 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.682023 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.684581 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.684699 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.684772 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.684848 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.684924 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.685365 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pt5zn"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.686011 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.686276 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.686480 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.686581 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.686680 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.686689 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.686714 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.687855 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.688271 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.688392 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.688530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.688697 4805 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.688729 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.688918 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.689079 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.700093 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.704977 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.708628 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.724805 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.725438 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.725670 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.725914 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.726153 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.726438 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.726748 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.726822 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.726877 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727026 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727157 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727256 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727354 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727371 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 00:08:40 crc 
kubenswrapper[4805]: I1203 00:08:40.727410 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727474 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727509 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.726758 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727082 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727681 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727771 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727800 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727730 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sz46g"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.726805 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727757 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.727757 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.728361 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.728668 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kn8zr"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.728900 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29412000-pvwqn"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.729075 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.729244 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.729508 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.729568 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hqrgj"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.729613 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.729981 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.731573 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.732527 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.733008 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-slg54"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.734252 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.737301 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-njft5"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.737541 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.737629 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.737703 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.737877 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.737982 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.738247 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.738639 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.738766 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.738862 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.739241 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.740006 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.740171 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.741675 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.742263 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.742618 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.742935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.743859 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.744542 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.745019 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.745732 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.747493 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hsnws"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.747988 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.749479 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.749662 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.750005 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.750096 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.750354 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.750399 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.750485 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.750944 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.751901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-config\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.751940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsl5w\" (UniqueName: \"kubernetes.io/projected/467b4db8-19ee-4476-b72a-158547e24884-kube-api-access-wsl5w\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.751959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cce2da-692f-432a-99fe-f0340759781d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.751975 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dx8l\" (UniqueName: 
\"kubernetes.io/projected/a6cce2da-692f-432a-99fe-f0340759781d-kube-api-access-5dx8l\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.751994 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e9e2297-dd89-42c0-a954-65ef398b4618-config\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752011 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-client\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752028 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdch\" (UniqueName: \"kubernetes.io/projected/eb1a5f57-d662-40f5-96a5-bf9ca852e368-kube-api-access-4hdch\") pod \"cluster-samples-operator-665b6dd947-wzx9r\" (UID: \"eb1a5f57-d662-40f5-96a5-bf9ca852e368\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752047 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752069 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8279cb46-b6c7-4f7e-a572-f52bfecfaada-profile-collector-cert\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-ca\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b43c926-4b2b-4560-874a-25662916e05e-config\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-config\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-metrics-certs\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752146 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-client-ca\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9hkf\" (UniqueName: \"kubernetes.io/projected/2e9e2297-dd89-42c0-a954-65ef398b4618-kube-api-access-q9hkf\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752440 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd4dd18-f71c-47de-be9e-7648df9eed36-service-ca-bundle\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " 
pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752631 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b43c926-4b2b-4560-874a-25662916e05e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752810 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/daf7d226-4f5d-4112-b02a-9eaae61a6d74-signing-key\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752841 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvx75\" (UniqueName: \"kubernetes.io/projected/cf793c88-2d49-44e2-b11d-6ef660f25561-kube-api-access-fvx75\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.752866 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9bg\" (UniqueName: \"kubernetes.io/projected/dcd4dd18-f71c-47de-be9e-7648df9eed36-kube-api-access-rj9bg\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " 
pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753103 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-serving-cert\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753141 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssgcs\" (UniqueName: \"kubernetes.io/projected/57c96cff-592a-47c8-a038-6bb23bac6aa5-kube-api-access-ssgcs\") pod \"downloads-7954f5f757-hqrgj\" (UID: \"57c96cff-592a-47c8-a038-6bb23bac6aa5\") " pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753160 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctj6\" (UniqueName: \"kubernetes.io/projected/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-kube-api-access-bctj6\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753177 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/cf793c88-2d49-44e2-b11d-6ef660f25561-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e9e2297-dd89-42c0-a954-65ef398b4618-trusted-ca\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753290 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753334 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8279cb46-b6c7-4f7e-a572-f52bfecfaada-srv-cert\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-stats-auth\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 
00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753466 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/daf7d226-4f5d-4112-b02a-9eaae61a6d74-signing-cabundle\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753515 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf793c88-2d49-44e2-b11d-6ef660f25561-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cce2da-692f-432a-99fe-f0340759781d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753622 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb1a5f57-d662-40f5-96a5-bf9ca852e368-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wzx9r\" (UID: \"eb1a5f57-d662-40f5-96a5-bf9ca852e368\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753681 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-685j2\" (UniqueName: \"kubernetes.io/projected/976c4d52-a36d-43f0-ae70-921f30051080-kube-api-access-685j2\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753701 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.753717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754137 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754223 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754289 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754405 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754492 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754629 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754758 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e608454a-5352-4a77-80cb-5294bd1ae980-serving-cert\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e9e2297-dd89-42c0-a954-65ef398b4618-serving-cert\") 
pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754813 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976c4d52-a36d-43f0-ae70-921f30051080-serving-cert\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-audit-policies\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754944 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cce2da-692f-432a-99fe-f0340759781d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.754983 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhf2\" (UniqueName: \"kubernetes.io/projected/8279cb46-b6c7-4f7e-a572-f52bfecfaada-kube-api-access-gmhf2\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755019 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b43c926-4b2b-4560-874a-25662916e05e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755035 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-service-ca\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755070 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755099 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-client-ca\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755213 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755310 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/467b4db8-19ee-4476-b72a-158547e24884-audit-dir\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755328 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755339 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755360 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-default-certificate\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-config\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755424 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755445 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4p9m\" (UniqueName: \"kubernetes.io/projected/daf7d226-4f5d-4112-b02a-9eaae61a6d74-kube-api-access-k4p9m\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.755491 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xrrw\" 
(UniqueName: \"kubernetes.io/projected/e608454a-5352-4a77-80cb-5294bd1ae980-kube-api-access-2xrrw\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.763282 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.766437 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.767443 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.768109 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.768795 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.769403 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.769689 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.769880 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.772620 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7"] Dec 03 
00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.781678 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.781962 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.781584 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.785017 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.785697 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.787482 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.785587 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.787753 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.789782 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.790432 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.790571 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.791600 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.792867 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fb2qm"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.799130 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.801400 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.802664 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.803828 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.805737 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.810662 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.813404 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.813596 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kdg2s"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.814093 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d6jmb"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.814171 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.815085 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-642ng"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.815969 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.816910 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s77db"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.823612 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.824114 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.824594 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.825082 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.827146 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.828902 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d7x48"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.829323 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.836046 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.836142 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.836890 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.838029 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.854841 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.857273 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.857323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-config\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.857381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-encryption-config\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858033 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd54177-64af-4b6a-952a-b1fce803a911-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858136 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-config\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858243 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsl5w\" (UniqueName: \"kubernetes.io/projected/467b4db8-19ee-4476-b72a-158547e24884-kube-api-access-wsl5w\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858259 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858289 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e9e2297-dd89-42c0-a954-65ef398b4618-config\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cce2da-692f-432a-99fe-f0340759781d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dx8l\" (UniqueName: \"kubernetes.io/projected/a6cce2da-692f-432a-99fe-f0340759781d-kube-api-access-5dx8l\") pod 
\"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-client\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdch\" (UniqueName: \"kubernetes.io/projected/eb1a5f57-d662-40f5-96a5-bf9ca852e368-kube-api-access-4hdch\") pod \"cluster-samples-operator-665b6dd947-wzx9r\" (UID: \"eb1a5f57-d662-40f5-96a5-bf9ca852e368\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8279cb46-b6c7-4f7e-a572-f52bfecfaada-profile-collector-cert\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858467 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thm9p\" (UniqueName: \"kubernetes.io/projected/54f1e878-22b1-43ab-9225-7212ec9633e7-kube-api-access-thm9p\") pod \"image-pruner-29412000-pvwqn\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858494 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/efd736f9-2ca3-40e9-b51a-25f95ff4529c-node-pullsecrets\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858516 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/efd736f9-2ca3-40e9-b51a-25f95ff4529c-audit-dir\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-ca\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b43c926-4b2b-4560-874a-25662916e05e-config\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-config\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-etcd-client\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-image-import-ca\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-client-ca\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858738 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9hkf\" (UniqueName: \"kubernetes.io/projected/2e9e2297-dd89-42c0-a954-65ef398b4618-kube-api-access-q9hkf\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858761 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd4dd18-f71c-47de-be9e-7648df9eed36-service-ca-bundle\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858786 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-metrics-certs\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858821 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc 
kubenswrapper[4805]: I1203 00:08:40.858855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2409836-005c-4e36-ae98-14a3053117d1-config\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp74f\" (UniqueName: \"kubernetes.io/projected/5bd54177-64af-4b6a-952a-b1fce803a911-kube-api-access-mp74f\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b43c926-4b2b-4560-874a-25662916e05e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/daf7d226-4f5d-4112-b02a-9eaae61a6d74-signing-key\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.858988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvx75\" (UniqueName: \"kubernetes.io/projected/cf793c88-2d49-44e2-b11d-6ef660f25561-kube-api-access-fvx75\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859020 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-config\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-serving-cert\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9bg\" (UniqueName: \"kubernetes.io/projected/dcd4dd18-f71c-47de-be9e-7648df9eed36-kube-api-access-rj9bg\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/54f1e878-22b1-43ab-9225-7212ec9633e7-serviceca\") pod \"image-pruner-29412000-pvwqn\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859132 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-serving-cert\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b51558fc-0e2d-4299-a606-cbbe8992836c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859180 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtjj6\" (UniqueName: \"kubernetes.io/projected/b51558fc-0e2d-4299-a606-cbbe8992836c-kube-api-access-jtjj6\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859222 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssgcs\" (UniqueName: \"kubernetes.io/projected/57c96cff-592a-47c8-a038-6bb23bac6aa5-kube-api-access-ssgcs\") pod \"downloads-7954f5f757-hqrgj\" (UID: \"57c96cff-592a-47c8-a038-6bb23bac6aa5\") " pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88szd\" (UniqueName: \"kubernetes.io/projected/2cd88982-bc5c-4811-9794-b7342f16d887-kube-api-access-88szd\") pod \"control-plane-machine-set-operator-78cbb6b69f-7sxvt\" (UID: \"2cd88982-bc5c-4811-9794-b7342f16d887\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctj6\" (UniqueName: \"kubernetes.io/projected/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-kube-api-access-bctj6\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859347 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf793c88-2d49-44e2-b11d-6ef660f25561-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859372 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859406 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e9e2297-dd89-42c0-a954-65ef398b4618-trusted-ca\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8279cb46-b6c7-4f7e-a572-f52bfecfaada-srv-cert\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-stats-auth\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859520 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkdp\" (UniqueName: \"kubernetes.io/projected/b2409836-005c-4e36-ae98-14a3053117d1-kube-api-access-2bkdp\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-service-ca-bundle\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e9e2297-dd89-42c0-a954-65ef398b4618-config\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859569 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-audit-policies\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc 
kubenswrapper[4805]: I1203 00:08:40.859672 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/daf7d226-4f5d-4112-b02a-9eaae61a6d74-signing-cabundle\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859773 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51558fc-0e2d-4299-a606-cbbe8992836c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859920 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2409836-005c-4e36-ae98-14a3053117d1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.859981 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-config\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.860036 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 
00:08:40.861068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cce2da-692f-432a-99fe-f0340759781d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.861831 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-config\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.862252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-config\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.863671 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.863833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.863916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb1a5f57-d662-40f5-96a5-bf9ca852e368-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wzx9r\" (UID: \"eb1a5f57-d662-40f5-96a5-bf9ca852e368\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.863952 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf793c88-2d49-44e2-b11d-6ef660f25561-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cce2da-692f-432a-99fe-f0340759781d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864103 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-config\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864128 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b2409836-005c-4e36-ae98-14a3053117d1-images\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-685j2\" (UniqueName: \"kubernetes.io/projected/976c4d52-a36d-43f0-ae70-921f30051080-kube-api-access-685j2\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-serving-cert\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864383 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-ca\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqxlf\" (UniqueName: \"kubernetes.io/projected/742c5936-25e8-4b3f-82ec-e9a0125b855c-kube-api-access-sqxlf\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864560 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e608454a-5352-4a77-80cb-5294bd1ae980-serving-cert\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.864832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-client-ca\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.872350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-client\") pod \"etcd-operator-b45778765-slg54\" (UID: 
\"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.878840 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.878953 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.883937 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e9e2297-dd89-42c0-a954-65ef398b4618-serving-cert\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886308 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cd88982-bc5c-4811-9794-b7342f16d887-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7sxvt\" (UID: \"2cd88982-bc5c-4811-9794-b7342f16d887\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886410 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976c4d52-a36d-43f0-ae70-921f30051080-serving-cert\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgrc\" (UniqueName: \"kubernetes.io/projected/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-kube-api-access-mzgrc\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-audit-policies\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cce2da-692f-432a-99fe-f0340759781d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886618 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqmx\" (UniqueName: \"kubernetes.io/projected/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-kube-api-access-hbqmx\") pod 
\"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886645 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmhf2\" (UniqueName: \"kubernetes.io/projected/8279cb46-b6c7-4f7e-a572-f52bfecfaada-kube-api-access-gmhf2\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-audit\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886901 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hq6g\" (UniqueName: \"kubernetes.io/projected/efd736f9-2ca3-40e9-b51a-25f95ff4529c-kube-api-access-8hq6g\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b43c926-4b2b-4560-874a-25662916e05e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.886980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd54177-64af-4b6a-952a-b1fce803a911-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-service-ca\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887061 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887101 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-auth-proxy-config\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887124 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-machine-approver-tls\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887270 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-client-ca\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/742c5936-25e8-4b3f-82ec-e9a0125b855c-audit-dir\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-serving-cert\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887780 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/467b4db8-19ee-4476-b72a-158547e24884-audit-dir\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887808 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-default-certificate\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.887864 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-encryption-config\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.888049 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.888349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-audit-policies\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.888760 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.888884 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.889622 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-config\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.889675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.889710 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-etcd-client\") pod 
\"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.889797 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4p9m\" (UniqueName: \"kubernetes.io/projected/daf7d226-4f5d-4112-b02a-9eaae61a6d74-kube-api-access-k4p9m\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.889828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xrrw\" (UniqueName: \"kubernetes.io/projected/e608454a-5352-4a77-80cb-5294bd1ae980-kube-api-access-2xrrw\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.889855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-etcd-serving-ca\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.890833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-etcd-service-ca\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.890906 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/467b4db8-19ee-4476-b72a-158547e24884-audit-dir\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.891440 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e9e2297-dd89-42c0-a954-65ef398b4618-trusted-ca\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.891958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.892370 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e9e2297-dd89-42c0-a954-65ef398b4618-serving-cert\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.892652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-serving-cert\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.892791 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-client-ca\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.893024 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cce2da-692f-432a-99fe-f0340759781d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.893779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.894649 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rm6z"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.894823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e608454a-5352-4a77-80cb-5294bd1ae980-config\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.895401 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-64tvm"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.895641 4805 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896299 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pt5zn"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896408 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896480 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896548 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-46lxq"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896604 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896804 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976c4d52-a36d-43f0-ae70-921f30051080-serving-cert\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 
00:08:40.897115 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.897521 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.897686 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.896501 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb1a5f57-d662-40f5-96a5-bf9ca852e368-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wzx9r\" (UID: \"eb1a5f57-d662-40f5-96a5-bf9ca852e368\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.898653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.899063 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.899318 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwpvv"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.899350 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8744"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.899365 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.899377 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.899392 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.899407 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ncp6l"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.900491 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.901030 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s77db"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.903320 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.903418 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.903451 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sz46g"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.903477 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.904118 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-stats-auth\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.904413 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.904816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e608454a-5352-4a77-80cb-5294bd1ae980-serving-cert\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.908797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.909045 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd4dd18-f71c-47de-be9e-7648df9eed36-service-ca-bundle\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.909249 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-slg54"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.910087 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.911326 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-default-certificate\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.912182 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hsnws"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.913436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-46lxq"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.914690 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd4dd18-f71c-47de-be9e-7648df9eed36-metrics-certs\") pod \"router-default-5444994796-njft5\" (UID: 
\"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.916076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.917489 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-642ng"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.919766 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.922293 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kn8zr"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.925213 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.927254 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fb2qm"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.929527 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.933845 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29412000-pvwqn"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.935256 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.936228 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.937267 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.938471 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.939461 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.940607 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d7x48"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.941672 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-64tvm"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.942951 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hqrgj"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.944046 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kdg2s"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.945025 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.946022 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tpd8x"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.946797 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.947027 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-7988v"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.947415 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.948269 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rm6z"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.949085 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.949416 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.950867 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.951611 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b43c926-4b2b-4560-874a-25662916e05e-config\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.952103 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tpd8x"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.953800 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.955429 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ncp6l"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.956986 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nlnzd"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.957817 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.958224 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nlnzd"] Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.969127 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.973976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b43c926-4b2b-4560-874a-25662916e05e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.989602 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2409836-005c-4e36-ae98-14a3053117d1-config\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrwr\" (UniqueName: \"kubernetes.io/projected/698014dc-c4df-4eae-b761-3f5192f6492a-kube-api-access-pbrwr\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-config\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990811 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990851 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/54f1e878-22b1-43ab-9225-7212ec9633e7-serviceca\") pod \"image-pruner-29412000-pvwqn\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990879 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b51558fc-0e2d-4299-a606-cbbe8992836c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990906 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.990930 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-service-ca\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67d3c3b7-f552-4372-ac94-0baf8aaadd78-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991634 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bkdp\" (UniqueName: \"kubernetes.io/projected/b2409836-005c-4e36-ae98-14a3053117d1-kube-api-access-2bkdp\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adbdf2c1-efd2-4036-b9be-e32b0ca196db-apiservice-cert\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991708 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991778 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b2409836-005c-4e36-ae98-14a3053117d1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-config\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2409836-005c-4e36-ae98-14a3053117d1-config\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991845 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-config\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-config\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991911 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b2409836-005c-4e36-ae98-14a3053117d1-images\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-serving-cert\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.991969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gsn4\" (UniqueName: \"kubernetes.io/projected/9594ceca-a9f4-497a-876c-845411320228-kube-api-access-9gsn4\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992001 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/54f1e878-22b1-43ab-9225-7212ec9633e7-serviceca\") pod \"image-pruner-29412000-pvwqn\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992015 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cd88982-bc5c-4811-9794-b7342f16d887-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7sxvt\" (UID: \"2cd88982-bc5c-4811-9794-b7342f16d887\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" 
Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbqmx\" (UniqueName: \"kubernetes.io/projected/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-kube-api-access-hbqmx\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992095 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9trm\" (UniqueName: \"kubernetes.io/projected/fd1a9d56-21a3-450e-b9af-fc132ee10466-kube-api-access-n9trm\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992122 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hq6g\" (UniqueName: \"kubernetes.io/projected/efd736f9-2ca3-40e9-b51a-25f95ff4529c-kube-api-access-8hq6g\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd54177-64af-4b6a-952a-b1fce803a911-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992177 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-auth-proxy-config\") pod 
\"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992209 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-trusted-ca-bundle\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992270 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-encryption-config\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992293 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-etcd-client\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992314 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/698014dc-c4df-4eae-b761-3f5192f6492a-trusted-ca\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992333 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-encryption-config\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd54177-64af-4b6a-952a-b1fce803a911-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-config\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992384 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-oauth-serving-cert\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992425 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9594ceca-a9f4-497a-876c-845411320228-console-serving-cert\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thm9p\" (UniqueName: \"kubernetes.io/projected/54f1e878-22b1-43ab-9225-7212ec9633e7-kube-api-access-thm9p\") pod \"image-pruner-29412000-pvwqn\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992465 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/efd736f9-2ca3-40e9-b51a-25f95ff4529c-node-pullsecrets\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992483 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-etcd-client\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 
crc kubenswrapper[4805]: I1203 00:08:40.992524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-image-import-ca\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992564 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp74f\" (UniqueName: \"kubernetes.io/projected/5bd54177-64af-4b6a-952a-b1fce803a911-kube-api-access-mp74f\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8kz\" (UniqueName: \"kubernetes.io/projected/adbdf2c1-efd2-4036-b9be-e32b0ca196db-kube-api-access-kz8kz\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-serving-cert\") pod 
\"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992645 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtjj6\" (UniqueName: \"kubernetes.io/projected/b51558fc-0e2d-4299-a606-cbbe8992836c-kube-api-access-jtjj6\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/698014dc-c4df-4eae-b761-3f5192f6492a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88szd\" (UniqueName: \"kubernetes.io/projected/2cd88982-bc5c-4811-9794-b7342f16d887-kube-api-access-88szd\") pod \"control-plane-machine-set-operator-78cbb6b69f-7sxvt\" (UID: \"2cd88982-bc5c-4811-9794-b7342f16d887\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9594ceca-a9f4-497a-876c-845411320228-console-oauth-config\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992724 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-console-config\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992752 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67d3c3b7-f552-4372-ac94-0baf8aaadd78-images\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd1a9d56-21a3-450e-b9af-fc132ee10466-secret-volume\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992787 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-service-ca-bundle\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-audit-policies\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992837 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51558fc-0e2d-4299-a606-cbbe8992836c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992859 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb9dz\" (UniqueName: \"kubernetes.io/projected/fb160ac4-0c55-4a33-86b0-c8026712e657-kube-api-access-pb9dz\") pod \"migrator-59844c95c7-xk9qc\" (UID: \"fb160ac4-0c55-4a33-86b0-c8026712e657\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-config\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.992889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67d3c3b7-f552-4372-ac94-0baf8aaadd78-proxy-tls\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.993123 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/b2409836-005c-4e36-ae98-14a3053117d1-images\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.993650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-image-import-ca\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.994319 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-service-ca-bundle\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.994475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-audit-policies\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.994986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd54177-64af-4b6a-952a-b1fce803a911-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.995161 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/efd736f9-2ca3-40e9-b51a-25f95ff4529c-node-pullsecrets\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.995703 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-config\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.996764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-auth-proxy-config\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.996853 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/adbdf2c1-efd2-4036-b9be-e32b0ca196db-tmpfs\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.996922 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqxlf\" (UniqueName: \"kubernetes.io/projected/742c5936-25e8-4b3f-82ec-e9a0125b855c-kube-api-access-sqxlf\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997104 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzgrc\" (UniqueName: \"kubernetes.io/projected/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-kube-api-access-mzgrc\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-config\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997570 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-audit\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/698014dc-c4df-4eae-b761-3f5192f6492a-metrics-tls\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997664 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997681 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvff8\" (UniqueName: \"kubernetes.io/projected/67d3c3b7-f552-4372-ac94-0baf8aaadd78-kube-api-access-rvff8\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.997716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-machine-approver-tls\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:40 crc kubenswrapper[4805]: I1203 00:08:40.998710 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/742c5936-25e8-4b3f-82ec-e9a0125b855c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.019809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.019961 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-audit\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.023664 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2409836-005c-4e36-ae98-14a3053117d1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.024993 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-serving-cert\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.025139 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-etcd-client\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.025266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/742c5936-25e8-4b3f-82ec-e9a0125b855c-audit-dir\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.025213 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.025357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-serving-cert\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.025628 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-etcd-serving-ca\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.026228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd54177-64af-4b6a-952a-b1fce803a911-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.026347 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/742c5936-25e8-4b3f-82ec-e9a0125b855c-audit-dir\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.026379 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adbdf2c1-efd2-4036-b9be-e32b0ca196db-webhook-cert\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.026455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/efd736f9-2ca3-40e9-b51a-25f95ff4529c-audit-dir\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.026458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/efd736f9-2ca3-40e9-b51a-25f95ff4529c-etcd-serving-ca\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.026622 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/efd736f9-2ca3-40e9-b51a-25f95ff4529c-audit-dir\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.026764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-encryption-config\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.027492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-machine-approver-tls\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.028465 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-serving-cert\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.028447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/742c5936-25e8-4b3f-82ec-e9a0125b855c-etcd-client\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.028686 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.030169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-encryption-config\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.031565 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.032463 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efd736f9-2ca3-40e9-b51a-25f95ff4529c-serving-cert\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.034087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.035031 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf793c88-2d49-44e2-b11d-6ef660f25561-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:41 
crc kubenswrapper[4805]: I1203 00:08:41.049599 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.070045 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.077829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf793c88-2d49-44e2-b11d-6ef660f25561-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: \"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.090443 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.109526 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.117689 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cd88982-bc5c-4811-9794-b7342f16d887-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7sxvt\" (UID: \"2cd88982-bc5c-4811-9794-b7342f16d887\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.127543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/698014dc-c4df-4eae-b761-3f5192f6492a-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.127909 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvff8\" (UniqueName: \"kubernetes.io/projected/67d3c3b7-f552-4372-ac94-0baf8aaadd78-kube-api-access-rvff8\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.128382 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adbdf2c1-efd2-4036-b9be-e32b0ca196db-webhook-cert\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.128510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbrwr\" (UniqueName: \"kubernetes.io/projected/698014dc-c4df-4eae-b761-3f5192f6492a-kube-api-access-pbrwr\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.128541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.128833 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.128985 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-service-ca\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.129011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67d3c3b7-f552-4372-ac94-0baf8aaadd78-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.129088 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adbdf2c1-efd2-4036-b9be-e32b0ca196db-apiservice-cert\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.129157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.129873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67d3c3b7-f552-4372-ac94-0baf8aaadd78-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.129879 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130305 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gsn4\" (UniqueName: \"kubernetes.io/projected/9594ceca-a9f4-497a-876c-845411320228-kube-api-access-9gsn4\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9trm\" (UniqueName: \"kubernetes.io/projected/fd1a9d56-21a3-450e-b9af-fc132ee10466-kube-api-access-n9trm\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130491 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-trusted-ca-bundle\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 
00:08:41.130523 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/698014dc-c4df-4eae-b761-3f5192f6492a-trusted-ca\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-oauth-serving-cert\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130665 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9594ceca-a9f4-497a-876c-845411320228-console-serving-cert\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130772 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130831 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz8kz\" (UniqueName: \"kubernetes.io/projected/adbdf2c1-efd2-4036-b9be-e32b0ca196db-kube-api-access-kz8kz\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130920 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/698014dc-c4df-4eae-b761-3f5192f6492a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.130988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9594ceca-a9f4-497a-876c-845411320228-console-oauth-config\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.131006 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-console-config\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.131102 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67d3c3b7-f552-4372-ac94-0baf8aaadd78-images\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.131122 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd1a9d56-21a3-450e-b9af-fc132ee10466-secret-volume\") pod \"collect-profiles-29412000-64tqq\" (UID: 
\"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.131253 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb9dz\" (UniqueName: \"kubernetes.io/projected/fb160ac4-0c55-4a33-86b0-c8026712e657-kube-api-access-pb9dz\") pod \"migrator-59844c95c7-xk9qc\" (UID: \"fb160ac4-0c55-4a33-86b0-c8026712e657\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.131375 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67d3c3b7-f552-4372-ac94-0baf8aaadd78-proxy-tls\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.131411 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/adbdf2c1-efd2-4036-b9be-e32b0ca196db-tmpfs\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.131917 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/adbdf2c1-efd2-4036-b9be-e32b0ca196db-tmpfs\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.150155 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 
00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.169600 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.174903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b51558fc-0e2d-4299-a606-cbbe8992836c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.189729 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.194446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51558fc-0e2d-4299-a606-cbbe8992836c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.209752 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.230756 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.250153 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 
00:08:41.257113 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8279cb46-b6c7-4f7e-a572-f52bfecfaada-srv-cert\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.269706 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.275386 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd1a9d56-21a3-450e-b9af-fc132ee10466-secret-volume\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.276826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8279cb46-b6c7-4f7e-a572-f52bfecfaada-profile-collector-cert\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.291038 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.310655 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.329866 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 
00:08:41.349385 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.353558 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/daf7d226-4f5d-4112-b02a-9eaae61a6d74-signing-key\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.369930 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.389693 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.391479 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/daf7d226-4f5d-4112-b02a-9eaae61a6d74-signing-cabundle\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.410193 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.430833 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.437052 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.442016 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/adbdf2c1-efd2-4036-b9be-e32b0ca196db-webhook-cert\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.442493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adbdf2c1-efd2-4036-b9be-e32b0ca196db-apiservice-cert\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.469176 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.489953 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.509881 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.529755 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.548818 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.569903 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.572950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-console-config\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.590427 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.594661 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9594ceca-a9f4-497a-876c-845411320228-console-oauth-config\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.609714 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.630253 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.634020 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9594ceca-a9f4-497a-876c-845411320228-console-serving-cert\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.650247 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.659862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-service-ca\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " 
pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.676101 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.681838 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-trusted-ca-bundle\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.689416 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.692002 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9594ceca-a9f4-497a-876c-845411320228-oauth-serving-cert\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.708837 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.729573 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.748756 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.769129 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.789752 4805 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.809610 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.822162 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/698014dc-c4df-4eae-b761-3f5192f6492a-metrics-tls\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.828002 4805 request.go:700] Waited for 1.011642384s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0 Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.834798 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.842897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/698014dc-c4df-4eae-b761-3f5192f6492a-trusted-ca\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.849702 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.869613 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 
00:08:41.890621 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.910102 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.929497 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.948879 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.970400 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.989224 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 00:08:41 crc kubenswrapper[4805]: I1203 00:08:41.992002 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67d3c3b7-f552-4372-ac94-0baf8aaadd78-images\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.010112 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.012139 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.030047 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.049591 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.062860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.070126 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.089757 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.094883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67d3c3b7-f552-4372-ac94-0baf8aaadd78-proxy-tls\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 
00:08:42.109966 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.129527 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 00:08:42 crc kubenswrapper[4805]: E1203 00:08:42.129565 4805 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 03 00:08:42 crc kubenswrapper[4805]: E1203 00:08:42.129889 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume podName:fd1a9d56-21a3-450e-b9af-fc132ee10466 nodeName:}" failed. No retries permitted until 2025-12-03 00:08:42.629861948 +0000 UTC m=+146.478824554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume") pod "collect-profiles-29412000-64tqq" (UID: "fd1a9d56-21a3-450e-b9af-fc132ee10466") : failed to sync configmap cache: timed out waiting for the condition Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.168441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsl5w\" (UniqueName: \"kubernetes.io/projected/467b4db8-19ee-4476-b72a-158547e24884-kube-api-access-wsl5w\") pod \"oauth-openshift-558db77b4-sz46g\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.183668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvx75\" (UniqueName: \"kubernetes.io/projected/cf793c88-2d49-44e2-b11d-6ef660f25561-kube-api-access-fvx75\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzrzs\" (UID: 
\"cf793c88-2d49-44e2-b11d-6ef660f25561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.209849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssgcs\" (UniqueName: \"kubernetes.io/projected/57c96cff-592a-47c8-a038-6bb23bac6aa5-kube-api-access-ssgcs\") pod \"downloads-7954f5f757-hqrgj\" (UID: \"57c96cff-592a-47c8-a038-6bb23bac6aa5\") " pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.223036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dx8l\" (UniqueName: \"kubernetes.io/projected/a6cce2da-692f-432a-99fe-f0340759781d-kube-api-access-5dx8l\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.229625 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.238919 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.263766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdch\" (UniqueName: \"kubernetes.io/projected/eb1a5f57-d662-40f5-96a5-bf9ca852e368-kube-api-access-4hdch\") pod \"cluster-samples-operator-665b6dd947-wzx9r\" (UID: \"eb1a5f57-d662-40f5-96a5-bf9ca852e368\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.284748 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9hkf\" (UniqueName: \"kubernetes.io/projected/2e9e2297-dd89-42c0-a954-65ef398b4618-kube-api-access-q9hkf\") pod \"console-operator-58897d9998-kn8zr\" (UID: \"2e9e2297-dd89-42c0-a954-65ef398b4618\") " pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.303757 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj9bg\" (UniqueName: \"kubernetes.io/projected/dcd4dd18-f71c-47de-be9e-7648df9eed36-kube-api-access-rj9bg\") pod \"router-default-5444994796-njft5\" (UID: \"dcd4dd18-f71c-47de-be9e-7648df9eed36\") " pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.323038 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctj6\" (UniqueName: \"kubernetes.io/projected/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-kube-api-access-bctj6\") pod \"controller-manager-879f6c89f-k8744\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.343274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-685j2\" 
(UniqueName: \"kubernetes.io/projected/976c4d52-a36d-43f0-ae70-921f30051080-kube-api-access-685j2\") pod \"route-controller-manager-6576b87f9c-bfzn8\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.348255 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:42 crc kubenswrapper[4805]: E1203 00:08:42.348428 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:10:44.348395516 +0000 UTC m=+268.197358132 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.348651 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.348881 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.349618 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.351579 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.364409 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmhf2\" (UniqueName: \"kubernetes.io/projected/8279cb46-b6c7-4f7e-a572-f52bfecfaada-kube-api-access-gmhf2\") pod \"catalog-operator-68c6474976-499xn\" (UID: \"8279cb46-b6c7-4f7e-a572-f52bfecfaada\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.384857 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b43c926-4b2b-4560-874a-25662916e05e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6ltm8\" (UID: \"5b43c926-4b2b-4560-874a-25662916e05e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.406938 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cce2da-692f-432a-99fe-f0340759781d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5fd7z\" (UID: \"a6cce2da-692f-432a-99fe-f0340759781d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.413084 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.428861 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.444642 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.448281 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4p9m\" (UniqueName: \"kubernetes.io/projected/daf7d226-4f5d-4112-b02a-9eaae61a6d74-kube-api-access-k4p9m\") pod \"service-ca-9c57cc56f-hsnws\" (UID: \"daf7d226-4f5d-4112-b02a-9eaae61a6d74\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.449695 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.450223 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.450319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.452576 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.453925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.456368 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.459070 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.468230 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.469609 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.490163 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.494522 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.509746 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.527608 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.530490 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.532141 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs"] Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.540286 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.555789 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.564248 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.569284 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.571687 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.589815 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.596019 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.610288 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.631680 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.638307 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.648758 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.650270 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.651986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.652937 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.654608 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.668900 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sz46g"] Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.689267 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.691012 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xrrw\" (UniqueName: \"kubernetes.io/projected/e608454a-5352-4a77-80cb-5294bd1ae980-kube-api-access-2xrrw\") pod \"etcd-operator-b45778765-slg54\" (UID: \"e608454a-5352-4a77-80cb-5294bd1ae980\") " pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.710970 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.729825 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.750319 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.775290 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.790383 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.817922 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.829639 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.847668 4805 request.go:700] Waited for 1.900604321s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.849380 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.877542 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.890275 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.913320 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.943582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.954730 4805 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.969790 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 00:08:42 crc kubenswrapper[4805]: I1203 00:08:42.990556 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.010418 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.036746 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.074159 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s2lqj\" (UID: \"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.082831 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.087292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bkdp\" (UniqueName: \"kubernetes.io/projected/b2409836-005c-4e36-ae98-14a3053117d1-kube-api-access-2bkdp\") pod \"machine-api-operator-5694c8668f-gwpvv\" (UID: \"b2409836-005c-4e36-ae98-14a3053117d1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.105784 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbqmx\" (UniqueName: \"kubernetes.io/projected/9fd2f5bb-569b-42a2-9e7e-b6309d58eec0-kube-api-access-hbqmx\") pod \"machine-approver-56656f9798-pq2qs\" (UID: \"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.125756 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hq6g\" (UniqueName: \"kubernetes.io/projected/efd736f9-2ca3-40e9-b51a-25f95ff4529c-kube-api-access-8hq6g\") pod \"apiserver-76f77b778f-d6jmb\" (UID: \"efd736f9-2ca3-40e9-b51a-25f95ff4529c\") " pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.129711 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.157512 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp74f\" (UniqueName: \"kubernetes.io/projected/5bd54177-64af-4b6a-952a-b1fce803a911-kube-api-access-mp74f\") pod \"openshift-apiserver-operator-796bbdcf4f-mq2pr\" (UID: \"5bd54177-64af-4b6a-952a-b1fce803a911\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.173032 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtjj6\" (UniqueName: \"kubernetes.io/projected/b51558fc-0e2d-4299-a606-cbbe8992836c-kube-api-access-jtjj6\") pod \"openshift-controller-manager-operator-756b6f6bc6-27wk7\" (UID: \"b51558fc-0e2d-4299-a606-cbbe8992836c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.196973 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.206956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-njft5" event={"ID":"dcd4dd18-f71c-47de-be9e-7648df9eed36","Type":"ContainerStarted","Data":"ba5244048ff9fa653fae1ba6bbe644c88b9ff5fb41e05938279052412aaf090e"} Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.207008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-njft5" event={"ID":"dcd4dd18-f71c-47de-be9e-7648df9eed36","Type":"ContainerStarted","Data":"046f59cb9d4887fa077e404bd4a7ae97fc662ac3005cba4437f4fcace95ceccb"} Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.224992 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.228015 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" event={"ID":"467b4db8-19ee-4476-b72a-158547e24884","Type":"ContainerStarted","Data":"bfc0782d6c6b1b9ecd0939ea9aff7b5ff6e37e2167023700b4a7b822006e141e"} Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.230649 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" event={"ID":"cf793c88-2d49-44e2-b11d-6ef660f25561","Type":"ContainerStarted","Data":"da5e05bb6a1c5c9aa333d99cb2ee27600597a6819db1c525ea2c4c37c9a077e0"} Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.231012 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" event={"ID":"cf793c88-2d49-44e2-b11d-6ef660f25561","Type":"ContainerStarted","Data":"0f03fac219301d734b7dd5055999657a2b29ba97b6569fdf4f27dc64c15c38da"} Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.237978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thm9p\" (UniqueName: \"kubernetes.io/projected/54f1e878-22b1-43ab-9225-7212ec9633e7-kube-api-access-thm9p\") pod \"image-pruner-29412000-pvwqn\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.250363 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.252905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88szd\" (UniqueName: \"kubernetes.io/projected/2cd88982-bc5c-4811-9794-b7342f16d887-kube-api-access-88szd\") pod \"control-plane-machine-set-operator-78cbb6b69f-7sxvt\" (UID: \"2cd88982-bc5c-4811-9794-b7342f16d887\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.253979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqxlf\" (UniqueName: \"kubernetes.io/projected/742c5936-25e8-4b3f-82ec-e9a0125b855c-kube-api-access-sqxlf\") pod \"apiserver-7bbb656c7d-wfzwm\" (UID: \"742c5936-25e8-4b3f-82ec-e9a0125b855c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.261679 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzgrc\" (UniqueName: \"kubernetes.io/projected/29e6ec47-1ff7-4c33-814a-3e25e3c40a88-kube-api-access-mzgrc\") pod \"authentication-operator-69f744f599-pt5zn\" (UID: \"29e6ec47-1ff7-4c33-814a-3e25e3c40a88\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.274351 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvff8\" (UniqueName: \"kubernetes.io/projected/67d3c3b7-f552-4372-ac94-0baf8aaadd78-kube-api-access-rvff8\") pod \"machine-config-operator-74547568cd-6jjf6\" (UID: \"67d3c3b7-f552-4372-ac94-0baf8aaadd78\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.278719 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.297762 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbrwr\" (UniqueName: \"kubernetes.io/projected/698014dc-c4df-4eae-b761-3f5192f6492a-kube-api-access-pbrwr\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:43 crc kubenswrapper[4805]: W1203 00:08:43.307912 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fd2f5bb_569b_42a2_9e7e_b6309d58eec0.slice/crio-3fbcdeed61d33b7a0e6e0e92492f12cd8ef7887517edafcf709b766beddb07ee WatchSource:0}: Error finding container 3fbcdeed61d33b7a0e6e0e92492f12cd8ef7887517edafcf709b766beddb07ee: Status 404 returned error can't find the container with id 3fbcdeed61d33b7a0e6e0e92492f12cd8ef7887517edafcf709b766beddb07ee Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.321729 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dx5d9\" (UID: \"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.336094 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gsn4\" (UniqueName: \"kubernetes.io/projected/9594ceca-a9f4-497a-876c-845411320228-kube-api-access-9gsn4\") pod \"console-f9d7485db-fb2qm\" (UID: \"9594ceca-a9f4-497a-876c-845411320228\") " pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.342645 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.346326 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kn8zr"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.349463 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.349525 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hqrgj"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.355095 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9trm\" (UniqueName: \"kubernetes.io/projected/fd1a9d56-21a3-450e-b9af-fc132ee10466-kube-api-access-n9trm\") pod \"collect-profiles-29412000-64tqq\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.371580 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz8kz\" (UniqueName: \"kubernetes.io/projected/adbdf2c1-efd2-4036-b9be-e32b0ca196db-kube-api-access-kz8kz\") pod \"packageserver-d55dfcdfc-995bc\" (UID: \"adbdf2c1-efd2-4036-b9be-e32b0ca196db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:43 crc kubenswrapper[4805]: W1203 00:08:43.381064 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57c96cff_592a_47c8_a038_6bb23bac6aa5.slice/crio-06dbf74ae3c92de942f53d04f0b5ba4af6d5dfb7ef08fbb034a14d494ff6f103 WatchSource:0}: Error finding container 06dbf74ae3c92de942f53d04f0b5ba4af6d5dfb7ef08fbb034a14d494ff6f103: Status 404 returned error can't find the container with id 
06dbf74ae3c92de942f53d04f0b5ba4af6d5dfb7ef08fbb034a14d494ff6f103 Dec 03 00:08:43 crc kubenswrapper[4805]: W1203 00:08:43.385682 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6cce2da_692f_432a_99fe_f0340759781d.slice/crio-970cee43b08a18c8b00d9616c993827f5da3b2dd692d8ddc3ead7c36728b754f WatchSource:0}: Error finding container 970cee43b08a18c8b00d9616c993827f5da3b2dd692d8ddc3ead7c36728b754f: Status 404 returned error can't find the container with id 970cee43b08a18c8b00d9616c993827f5da3b2dd692d8ddc3ead7c36728b754f Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.397849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/698014dc-c4df-4eae-b761-3f5192f6492a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-642ng\" (UID: \"698014dc-c4df-4eae-b761-3f5192f6492a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.410543 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb9dz\" (UniqueName: \"kubernetes.io/projected/fb160ac4-0c55-4a33-86b0-c8026712e657-kube-api-access-pb9dz\") pod \"migrator-59844c95c7-xk9qc\" (UID: \"fb160ac4-0c55-4a33-86b0-c8026712e657\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" Dec 03 00:08:43 crc kubenswrapper[4805]: W1203 00:08:43.433482 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e9e2297_dd89_42c0_a954_65ef398b4618.slice/crio-11f0f7ecc176dd4ba5b71611ca066380dd9709a4680f8f9d000b4d27efbe5058 WatchSource:0}: Error finding container 11f0f7ecc176dd4ba5b71611ca066380dd9709a4680f8f9d000b4d27efbe5058: Status 404 returned error can't find the container with id 11f0f7ecc176dd4ba5b71611ca066380dd9709a4680f8f9d000b4d27efbe5058 Dec 03 00:08:43 crc 
kubenswrapper[4805]: I1203 00:08:43.461832 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.461911 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.470669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa7b63c1-20b5-4963-b55a-48b6d75052a5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.470847 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6f9f11-3892-49dc-848c-e40db6d4629b-serving-cert\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.470876 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa7b63c1-20b5-4963-b55a-48b6d75052a5-proxy-tls\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.470970 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sz8k\" (UniqueName: 
\"kubernetes.io/projected/ae6f9f11-3892-49dc-848c-e40db6d4629b-kube-api-access-6sz8k\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471028 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pzb\" (UniqueName: \"kubernetes.io/projected/4f26c935-53de-4459-862f-4d5fbbe97e88-kube-api-access-j7pzb\") pod \"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471057 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-registry-tls\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsr2\" (UniqueName: \"kubernetes.io/projected/c134545f-c51b-4a9b-8604-78d2a46de64d-kube-api-access-9wsr2\") pod \"multus-admission-controller-857f4d67dd-d7x48\" (UID: \"c134545f-c51b-4a9b-8604-78d2a46de64d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/88be45c2-86f5-4e59-8a8d-903a7b898560-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l6nkz\" (UID: 
\"88be45c2-86f5-4e59-8a8d-903a7b898560\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be50df79-cb92-4c40-81ea-a90cee61b549-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471268 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxdf5\" (UniqueName: \"kubernetes.io/projected/aa7b63c1-20b5-4963-b55a-48b6d75052a5-kube-api-access-jxdf5\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471381 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be50df79-cb92-4c40-81ea-a90cee61b549-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 
00:08:43.471428 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f26c935-53de-4459-862f-4d5fbbe97e88-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471473 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4xl\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-kube-api-access-cn4xl\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-registry-certificates\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-bound-sa-token\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f26c935-53de-4459-862f-4d5fbbe97e88-srv-cert\") pod 
\"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptdjq\" (UniqueName: \"kubernetes.io/projected/88be45c2-86f5-4e59-8a8d-903a7b898560-kube-api-access-ptdjq\") pod \"package-server-manager-789f6589d5-l6nkz\" (UID: \"88be45c2-86f5-4e59-8a8d-903a7b898560\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-trusted-ca\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.471930 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f9f11-3892-49dc-848c-e40db6d4629b-config\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.472049 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c134545f-c51b-4a9b-8604-78d2a46de64d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d7x48\" (UID: \"c134545f-c51b-4a9b-8604-78d2a46de64d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:43 crc kubenswrapper[4805]: E1203 00:08:43.483485 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:43.983466446 +0000 UTC m=+147.832429112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.484987 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.493703 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.494961 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.509800 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.510349 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.518311 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hsnws"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.523358 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.536675 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.541759 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.548370 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-registry-certificates\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 
00:08:43.576677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-bound-sa-token\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576699 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-mountpoint-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576742 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-registration-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-socket-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f26c935-53de-4459-862f-4d5fbbe97e88-srv-cert\") pod \"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 
03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576860 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trcz8\" (UniqueName: \"kubernetes.io/projected/61cfa86d-9594-400f-991c-4819838ee49d-kube-api-access-trcz8\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdjq\" (UniqueName: \"kubernetes.io/projected/88be45c2-86f5-4e59-8a8d-903a7b898560-kube-api-access-ptdjq\") pod \"package-server-manager-789f6589d5-l6nkz\" (UID: \"88be45c2-86f5-4e59-8a8d-903a7b898560\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576920 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-trusted-ca\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxmp\" (UniqueName: \"kubernetes.io/projected/ae42d7f8-a970-4db2-bd74-1d0bf073d607-kube-api-access-hvxmp\") pod \"ingress-canary-tpd8x\" (UID: \"ae42d7f8-a970-4db2-bd74-1d0bf073d607\") " pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.576965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f9f11-3892-49dc-848c-e40db6d4629b-config\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c134545f-c51b-4a9b-8604-78d2a46de64d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d7x48\" (UID: \"c134545f-c51b-4a9b-8604-78d2a46de64d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577337 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22c00906-1dcc-4400-8cae-dc050595ee91-metrics-tls\") pod \"dns-operator-744455d44c-64tvm\" (UID: \"22c00906-1dcc-4400-8cae-dc050595ee91\") " pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: 
I1203 00:08:43.577438 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa7b63c1-20b5-4963-b55a-48b6d75052a5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae42d7f8-a970-4db2-bd74-1d0bf073d607-cert\") pod \"ingress-canary-tpd8x\" (UID: \"ae42d7f8-a970-4db2-bd74-1d0bf073d607\") " pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7d00a4-3b99-44ea-b608-376ac0866cf2-serving-cert\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577541 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7d00a4-3b99-44ea-b608-376ac0866cf2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6f9f11-3892-49dc-848c-e40db6d4629b-serving-cert\") pod 
\"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84613154-8790-4a63-9359-bdfbdaaaee0c-node-bootstrap-token\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577622 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa7b63c1-20b5-4963-b55a-48b6d75052a5-proxy-tls\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02d45a95-1e04-446b-a85f-af0dc2d6d453-config-volume\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577674 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqmz\" (UniqueName: \"kubernetes.io/projected/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-kube-api-access-2zqmz\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"certs\" (UniqueName: \"kubernetes.io/secret/84613154-8790-4a63-9359-bdfbdaaaee0c-certs\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svf8g\" (UniqueName: \"kubernetes.io/projected/9a7d00a4-3b99-44ea-b608-376ac0866cf2-kube-api-access-svf8g\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sz8k\" (UniqueName: \"kubernetes.io/projected/ae6f9f11-3892-49dc-848c-e40db6d4629b-kube-api-access-6sz8k\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577759 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pzb\" (UniqueName: \"kubernetes.io/projected/4f26c935-53de-4459-862f-4d5fbbe97e88-kube-api-access-j7pzb\") pod \"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577780 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-plugins-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 
03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577801 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-csi-data-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577867 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9fw\" (UniqueName: \"kubernetes.io/projected/02d45a95-1e04-446b-a85f-af0dc2d6d453-kube-api-access-hf9fw\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-registry-tls\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577913 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c8r5\" (UniqueName: \"kubernetes.io/projected/84613154-8790-4a63-9359-bdfbdaaaee0c-kube-api-access-2c8r5\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsr2\" (UniqueName: \"kubernetes.io/projected/c134545f-c51b-4a9b-8604-78d2a46de64d-kube-api-access-9wsr2\") pod \"multus-admission-controller-857f4d67dd-d7x48\" (UID: 
\"c134545f-c51b-4a9b-8604-78d2a46de64d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.577991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twxw7\" (UniqueName: \"kubernetes.io/projected/22c00906-1dcc-4400-8cae-dc050595ee91-kube-api-access-twxw7\") pod \"dns-operator-744455d44c-64tvm\" (UID: \"22c00906-1dcc-4400-8cae-dc050595ee91\") " pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.578041 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/88be45c2-86f5-4e59-8a8d-903a7b898560-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l6nkz\" (UID: \"88be45c2-86f5-4e59-8a8d-903a7b898560\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.578067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be50df79-cb92-4c40-81ea-a90cee61b549-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.578119 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxdf5\" (UniqueName: \"kubernetes.io/projected/aa7b63c1-20b5-4963-b55a-48b6d75052a5-kube-api-access-jxdf5\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.578143 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/02d45a95-1e04-446b-a85f-af0dc2d6d453-metrics-tls\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.578263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be50df79-cb92-4c40-81ea-a90cee61b549-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.578314 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f26c935-53de-4459-862f-4d5fbbe97e88-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.578341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn4xl\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-kube-api-access-cn4xl\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: E1203 00:08:43.579165 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.079135592 +0000 UTC m=+147.928098198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.579601 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa7b63c1-20b5-4963-b55a-48b6d75052a5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.581026 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-registry-certificates\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.584755 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6f9f11-3892-49dc-848c-e40db6d4629b-config\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.603444 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8744"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.606890 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c134545f-c51b-4a9b-8604-78d2a46de64d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d7x48\" (UID: \"c134545f-c51b-4a9b-8604-78d2a46de64d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.608562 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.608916 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be50df79-cb92-4c40-81ea-a90cee61b549-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.609708 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-trusted-ca\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.621766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa7b63c1-20b5-4963-b55a-48b6d75052a5-proxy-tls\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.632259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f26c935-53de-4459-862f-4d5fbbe97e88-srv-cert\") pod 
\"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.632598 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6f9f11-3892-49dc-848c-e40db6d4629b-serving-cert\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.636168 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn4xl\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-kube-api-access-cn4xl\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.641332 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.644296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f26c935-53de-4459-862f-4d5fbbe97e88-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pb48s\" (UID: \"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.644376 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/88be45c2-86f5-4e59-8a8d-903a7b898560-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l6nkz\" (UID: \"88be45c2-86f5-4e59-8a8d-903a7b898560\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.644398 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-bound-sa-token\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.644956 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be50df79-cb92-4c40-81ea-a90cee61b549-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.650453 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.652846 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-registry-tls\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.659317 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.674725 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsr2\" (UniqueName: \"kubernetes.io/projected/c134545f-c51b-4a9b-8604-78d2a46de64d-kube-api-access-9wsr2\") pod 
\"multus-admission-controller-857f4d67dd-d7x48\" (UID: \"c134545f-c51b-4a9b-8604-78d2a46de64d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.680451 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:43 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:43 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:43 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.680649 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22c00906-1dcc-4400-8cae-dc050595ee91-metrics-tls\") pod \"dns-operator-744455d44c-64tvm\" (UID: \"22c00906-1dcc-4400-8cae-dc050595ee91\") " pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684047 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684069 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ae42d7f8-a970-4db2-bd74-1d0bf073d607-cert\") pod \"ingress-canary-tpd8x\" (UID: \"ae42d7f8-a970-4db2-bd74-1d0bf073d607\") " pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684088 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7d00a4-3b99-44ea-b608-376ac0866cf2-serving-cert\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684110 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7d00a4-3b99-44ea-b608-376ac0866cf2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684127 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84613154-8790-4a63-9359-bdfbdaaaee0c-node-bootstrap-token\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684144 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02d45a95-1e04-446b-a85f-af0dc2d6d453-config-volume\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684172 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-2zqmz\" (UniqueName: \"kubernetes.io/projected/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-kube-api-access-2zqmz\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684223 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/84613154-8790-4a63-9359-bdfbdaaaee0c-certs\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svf8g\" (UniqueName: \"kubernetes.io/projected/9a7d00a4-3b99-44ea-b608-376ac0866cf2-kube-api-access-svf8g\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-plugins-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684274 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-csi-data-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684303 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hf9fw\" (UniqueName: \"kubernetes.io/projected/02d45a95-1e04-446b-a85f-af0dc2d6d453-kube-api-access-hf9fw\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c8r5\" (UniqueName: \"kubernetes.io/projected/84613154-8790-4a63-9359-bdfbdaaaee0c-kube-api-access-2c8r5\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684360 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twxw7\" (UniqueName: \"kubernetes.io/projected/22c00906-1dcc-4400-8cae-dc050595ee91-kube-api-access-twxw7\") pod \"dns-operator-744455d44c-64tvm\" (UID: \"22c00906-1dcc-4400-8cae-dc050595ee91\") " pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/02d45a95-1e04-446b-a85f-af0dc2d6d453-metrics-tls\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 
00:08:43.684443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-mountpoint-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684462 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-registration-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684466 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdjq\" (UniqueName: \"kubernetes.io/projected/88be45c2-86f5-4e59-8a8d-903a7b898560-kube-api-access-ptdjq\") pod \"package-server-manager-789f6589d5-l6nkz\" (UID: \"88be45c2-86f5-4e59-8a8d-903a7b898560\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684846 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-socket-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-socket-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 
00:08:43.684919 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trcz8\" (UniqueName: \"kubernetes.io/projected/61cfa86d-9594-400f-991c-4819838ee49d-kube-api-access-trcz8\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxmp\" (UniqueName: \"kubernetes.io/projected/ae42d7f8-a970-4db2-bd74-1d0bf073d607-kube-api-access-hvxmp\") pod \"ingress-canary-tpd8x\" (UID: \"ae42d7f8-a970-4db2-bd74-1d0bf073d607\") " pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.684973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: E1203 00:08:43.685159 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.185144639 +0000 UTC m=+148.034107245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.685311 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-plugins-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.685347 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-mountpoint-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.685441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-registration-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.685578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-csi-data-dir\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 
00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.686711 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02d45a95-1e04-446b-a85f-af0dc2d6d453-config-volume\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.688378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7d00a4-3b99-44ea-b608-376ac0866cf2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.693991 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.703041 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.703285 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pzb\" (UniqueName: \"kubernetes.io/projected/4f26c935-53de-4459-862f-4d5fbbe97e88-kube-api-access-j7pzb\") pod \"olm-operator-6b444d44fb-pb48s\" (UID: 
\"4f26c935-53de-4459-862f-4d5fbbe97e88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.703625 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ae42d7f8-a970-4db2-bd74-1d0bf073d607-cert\") pod \"ingress-canary-tpd8x\" (UID: \"ae42d7f8-a970-4db2-bd74-1d0bf073d607\") " pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.704041 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84613154-8790-4a63-9359-bdfbdaaaee0c-node-bootstrap-token\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.704346 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/02d45a95-1e04-446b-a85f-af0dc2d6d453-metrics-tls\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.704727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22c00906-1dcc-4400-8cae-dc050595ee91-metrics-tls\") pod \"dns-operator-744455d44c-64tvm\" (UID: \"22c00906-1dcc-4400-8cae-dc050595ee91\") " pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.705241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7d00a4-3b99-44ea-b608-376ac0866cf2-serving-cert\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.708602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/84613154-8790-4a63-9359-bdfbdaaaee0c-certs\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.713707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sz8k\" (UniqueName: \"kubernetes.io/projected/ae6f9f11-3892-49dc-848c-e40db6d4629b-kube-api-access-6sz8k\") pod \"service-ca-operator-777779d784-9lwf7\" (UID: \"ae6f9f11-3892-49dc-848c-e40db6d4629b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.786553 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:43 crc kubenswrapper[4805]: E1203 00:08:43.787039 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.28701052 +0000 UTC m=+148.135973126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.788007 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.833785 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-slg54"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.840159 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twxw7\" (UniqueName: \"kubernetes.io/projected/22c00906-1dcc-4400-8cae-dc050595ee91-kube-api-access-twxw7\") pod \"dns-operator-744455d44c-64tvm\" (UID: \"22c00906-1dcc-4400-8cae-dc050595ee91\") " pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.853267 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.860397 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d6jmb"] Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.860643 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.868493 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.889275 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svf8g\" (UniqueName: \"kubernetes.io/projected/9a7d00a4-3b99-44ea-b608-376ac0866cf2-kube-api-access-svf8g\") pod \"openshift-config-operator-7777fb866f-46lxq\" (UID: \"9a7d00a4-3b99-44ea-b608-376ac0866cf2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.889410 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxdf5\" (UniqueName: \"kubernetes.io/projected/aa7b63c1-20b5-4963-b55a-48b6d75052a5-kube-api-access-jxdf5\") pod \"machine-config-controller-84d6567774-s77db\" (UID: \"aa7b63c1-20b5-4963-b55a-48b6d75052a5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.889575 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.889635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:43 crc kubenswrapper[4805]: E1203 00:08:43.890163 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.390031092 +0000 UTC m=+148.238993688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.890883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c8r5\" (UniqueName: \"kubernetes.io/projected/84613154-8790-4a63-9359-bdfbdaaaee0c-kube-api-access-2c8r5\") pod \"machine-config-server-7988v\" (UID: \"84613154-8790-4a63-9359-bdfbdaaaee0c\") " pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.892587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9fw\" (UniqueName: \"kubernetes.io/projected/02d45a95-1e04-446b-a85f-af0dc2d6d453-kube-api-access-hf9fw\") pod \"dns-default-nlnzd\" (UID: \"02d45a95-1e04-446b-a85f-af0dc2d6d453\") " pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.902019 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trcz8\" (UniqueName: \"kubernetes.io/projected/61cfa86d-9594-400f-991c-4819838ee49d-kube-api-access-trcz8\") pod \"marketplace-operator-79b997595-9rm6z\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.904704 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.910162 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxmp\" (UniqueName: \"kubernetes.io/projected/ae42d7f8-a970-4db2-bd74-1d0bf073d607-kube-api-access-hvxmp\") pod \"ingress-canary-tpd8x\" (UID: \"ae42d7f8-a970-4db2-bd74-1d0bf073d607\") " pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.927024 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tpd8x" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.934289 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7988v" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.940496 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:43 crc kubenswrapper[4805]: I1203 00:08:43.991853 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:43 crc kubenswrapper[4805]: E1203 00:08:43.992569 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.492500449 +0000 UTC m=+148.341463065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.043632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqmz\" (UniqueName: \"kubernetes.io/projected/7c71a167-0d46-4202-b2b0-8e7eb6a0d932-kube-api-access-2zqmz\") pod \"csi-hostpathplugin-ncp6l\" (UID: \"7c71a167-0d46-4202-b2b0-8e7eb6a0d932\") " pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.084649 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr"] Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.085799 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj"] Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.095093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.095582 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:44.595562072 +0000 UTC m=+148.444524678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: W1203 00:08:44.108663 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefd736f9_2ca3_40e9_b51a_25f95ff4529c.slice/crio-53a7946d6732eb41744cf0554be82bacf23f3514fce573fbafb107b8f16700b5 WatchSource:0}: Error finding container 53a7946d6732eb41744cf0554be82bacf23f3514fce573fbafb107b8f16700b5: Status 404 returned error can't find the container with id 53a7946d6732eb41744cf0554be82bacf23f3514fce573fbafb107b8f16700b5 Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.116056 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.128095 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwpvv"] Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.176007 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pt5zn"] Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.177702 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.196447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.196669 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.696622273 +0000 UTC m=+148.545584879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.197432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.197882 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.697865025 +0000 UTC m=+148.546827631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.220898 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" Dec 03 00:08:44 crc kubenswrapper[4805]: W1203 00:08:44.236534 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd54177_64af_4b6a_952a_b1fce803a911.slice/crio-bc8ae2c3dd569b734fbad5ef7adeb035f8348f9318175cc23d12e669d74892a1 WatchSource:0}: Error finding container bc8ae2c3dd569b734fbad5ef7adeb035f8348f9318175cc23d12e669d74892a1: Status 404 returned error can't find the container with id bc8ae2c3dd569b734fbad5ef7adeb035f8348f9318175cc23d12e669d74892a1 Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.252418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" event={"ID":"2e9e2297-dd89-42c0-a954-65ef398b4618","Type":"ContainerStarted","Data":"11f0f7ecc176dd4ba5b71611ca066380dd9709a4680f8f9d000b4d27efbe5058"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.254871 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" 
event={"ID":"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0","Type":"ContainerStarted","Data":"007dbefd9e9e768f673599484ddce16a3279d5d59dede8842dca28716cde7b7a"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.255047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" event={"ID":"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0","Type":"ContainerStarted","Data":"3fbcdeed61d33b7a0e6e0e92492f12cd8ef7887517edafcf709b766beddb07ee"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.256258 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" event={"ID":"efd736f9-2ca3-40e9-b51a-25f95ff4529c","Type":"ContainerStarted","Data":"53a7946d6732eb41744cf0554be82bacf23f3514fce573fbafb107b8f16700b5"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.258528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" event={"ID":"8279cb46-b6c7-4f7e-a572-f52bfecfaada","Type":"ContainerStarted","Data":"d181c0ad4832eecd3cebc477e0277a029ae5e98ba9d4496d90c6e58468b2dab2"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.261644 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" event={"ID":"daf7d226-4f5d-4112-b02a-9eaae61a6d74","Type":"ContainerStarted","Data":"9ebb3e39e9fb7337f7822ab7c2ccb358ff55eedadf62f94ed86a1725feb4109c"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.263838 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"42b789e583d847ad86b23f87c095528ae4eca8a173c4c7624d4ef3b4ae300796"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.264947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"204075dee965e0cca049245a2158e0e0a494b5266eb5383d3b509e611019edf9"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.268813 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" event={"ID":"467b4db8-19ee-4476-b72a-158547e24884","Type":"ContainerStarted","Data":"eac0ab83ce3b1e5cacfc6355cdaf7c80768008f18475ab0fb432671b30076cd1"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.269662 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.272664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" event={"ID":"a6cce2da-692f-432a-99fe-f0340759781d","Type":"ContainerStarted","Data":"2259baa295a6b5d36a7e4946df12b0916db757b2d24d7b2b7d8b0a8277b32ed6"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.272693 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" event={"ID":"a6cce2da-692f-432a-99fe-f0340759781d","Type":"ContainerStarted","Data":"970cee43b08a18c8b00d9616c993827f5da3b2dd692d8ddc3ead7c36728b754f"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.285527 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" event={"ID":"e608454a-5352-4a77-80cb-5294bd1ae980","Type":"ContainerStarted","Data":"f4de804af9ac58901f934c6462f2decd1d12696a429cc28d2e4ccfe9997a6eb6"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.293596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hqrgj" 
event={"ID":"57c96cff-592a-47c8-a038-6bb23bac6aa5","Type":"ContainerStarted","Data":"3c97aeb4d00baf42cc57d9937169feaf99215de340a119e9b371d2d22313cbb4"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.293673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hqrgj" event={"ID":"57c96cff-592a-47c8-a038-6bb23bac6aa5","Type":"ContainerStarted","Data":"06dbf74ae3c92de942f53d04f0b5ba4af6d5dfb7ef08fbb034a14d494ff6f103"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.294302 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.299183 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" event={"ID":"ea247096-f6e0-490e-8fdd-3d6b6ce7a787","Type":"ContainerStarted","Data":"c9f0f0c69c667f5ce4d98e8e47c5fcded675f884e94b57286e7a5705bc25ba93"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.300934 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e3821ab09727028091a3094014322e21dd94a319166c4442ac5fe5687c522b0c"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.302266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" event={"ID":"976c4d52-a36d-43f0-ae70-921f30051080","Type":"ContainerStarted","Data":"75944fb132c073036f3ae9167001828c20e680d261017e624b4e0b72700779f7"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.319657 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" 
event={"ID":"5b43c926-4b2b-4560-874a-25662916e05e","Type":"ContainerStarted","Data":"dbb06faaae1c0346a0be2b6f2001ff08fa18c3414fa11e29c08ecddfd6a60766"} Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.338424 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.339052 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.839013229 +0000 UTC m=+148.687975835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.339458 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.340030 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.840005983 +0000 UTC m=+148.688968629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.440759 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.440965 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.940937083 +0000 UTC m=+148.789899689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.441211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.445769 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:44.945750994 +0000 UTC m=+148.794713600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.542512 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.542957 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.042931927 +0000 UTC m=+148.891894533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.568018 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:44 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:44 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:44 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.568077 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.613428 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.613603 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:08:44 crc 
kubenswrapper[4805]: I1203 00:08:44.623016 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.644222 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.647988 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.147961939 +0000 UTC m=+148.996924545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.752946 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.753395 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.253361111 +0000 UTC m=+149.102323717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.753698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.754102 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.254086549 +0000 UTC m=+149.103049155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.826739 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-njft5" podStartSLOduration=129.826713743 podStartE2EDuration="2m9.826713743s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:44.803309552 +0000 UTC m=+148.652272178" watchObservedRunningTime="2025-12-03 00:08:44.826713743 +0000 UTC m=+148.675676349" Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.857736 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.858099 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.358076295 +0000 UTC m=+149.207038901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:44 crc kubenswrapper[4805]: I1203 00:08:44.959791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:44 crc kubenswrapper[4805]: E1203 00:08:44.960222 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.460209424 +0000 UTC m=+149.309172030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.137037 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.137389 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.637344087 +0000 UTC m=+149.486306693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.137459 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.137896 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.63787998 +0000 UTC m=+149.486842586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.222080 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" podStartSLOduration=130.222056085 podStartE2EDuration="2m10.222056085s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:45.102462246 +0000 UTC m=+148.951424852" watchObservedRunningTime="2025-12-03 00:08:45.222056085 +0000 UTC m=+149.071018691" Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.238339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.238697 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.738674795 +0000 UTC m=+149.587637401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.293987 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.294435 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29412000-pvwqn"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.312879 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" event={"ID":"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3","Type":"ContainerStarted","Data":"7ceaa0ff834891fe5ced93d2f3ccddbb71f8e16ab0a3cc48877508c58faac211"} Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.315622 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" event={"ID":"b2409836-005c-4e36-ae98-14a3053117d1","Type":"ContainerStarted","Data":"b87e2b6161e6a9bedb3e5daf2278b2ea3f59f7e9156e36d886f446fe1826a19e"} Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.316746 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7988v" event={"ID":"84613154-8790-4a63-9359-bdfbdaaaee0c","Type":"ContainerStarted","Data":"1c7fa2ebe27634a4baa36981c018eea67efd6e80bd841b51b9e505eb5de13e59"} Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.317768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" event={"ID":"29e6ec47-1ff7-4c33-814a-3e25e3c40a88","Type":"ContainerStarted","Data":"0ce489e68b834bc9bfa6ef9d631b93de3e91678af067a23d5d980482f2afd785"} Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.320308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" event={"ID":"5bd54177-64af-4b6a-952a-b1fce803a911","Type":"ContainerStarted","Data":"bc8ae2c3dd569b734fbad5ef7adeb035f8348f9318175cc23d12e669d74892a1"} Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.323018 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.323063 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.340464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.340943 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:45.840924747 +0000 UTC m=+149.689887353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.441310 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.442695 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:45.942666026 +0000 UTC m=+149.791628632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: W1203 00:08:45.522618 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod742c5936_25e8_4b3f_82ec_e9a0125b855c.slice/crio-20a6e971053dc0319b43d4e529f68bffced86d4ca792ccfaea0db33d8c786025 WatchSource:0}: Error finding container 20a6e971053dc0319b43d4e529f68bffced86d4ca792ccfaea0db33d8c786025: Status 404 returned error can't find the container with id 20a6e971053dc0319b43d4e529f68bffced86d4ca792ccfaea0db33d8c786025 Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.534811 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.544674 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:45 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:45 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:45 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.545802 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.545403 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hqrgj" podStartSLOduration=130.545379729 podStartE2EDuration="2m10.545379729s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:45.541883962 +0000 UTC m=+149.390846578" watchObservedRunningTime="2025-12-03 00:08:45.545379729 +0000 UTC m=+149.394342335" Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.555592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.556614 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.056596603 +0000 UTC m=+149.905559209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.638659 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.642532 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fb2qm"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.648622 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-642ng"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.656773 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.657032 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.156993618 +0000 UTC m=+150.005956224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.657467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.658343 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.158330632 +0000 UTC m=+150.007293238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.673943 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzrzs" podStartSLOduration=130.673920005 podStartE2EDuration="2m10.673920005s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:45.672602612 +0000 UTC m=+149.521565228" watchObservedRunningTime="2025-12-03 00:08:45.673920005 +0000 UTC m=+149.522882611" Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.706586 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5fd7z" podStartSLOduration=130.706564049 podStartE2EDuration="2m10.706564049s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:45.705151243 +0000 UTC m=+149.554113869" watchObservedRunningTime="2025-12-03 00:08:45.706564049 +0000 UTC m=+149.555526655" Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.759140 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.759406 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.259366103 +0000 UTC m=+150.108328709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.759700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.760228 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.260188364 +0000 UTC m=+150.109150970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.829004 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.833387 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.860131 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.861031 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.861413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.861885 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:46.36183935 +0000 UTC m=+150.210801966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: W1203 00:08:45.862716 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb51558fc_0e2d_4299_a606_cbbe8992836c.slice/crio-79e27337c7eddab63d9a438a0394899ce84f4d684a991e06fbaed5651a962ec1 WatchSource:0}: Error finding container 79e27337c7eddab63d9a438a0394899ce84f4d684a991e06fbaed5651a962ec1: Status 404 returned error can't find the container with id 79e27337c7eddab63d9a438a0394899ce84f4d684a991e06fbaed5651a962ec1 Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.897344 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d7x48"] Dec 03 00:08:45 crc kubenswrapper[4805]: W1203 00:08:45.900457 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61783bcf_b91f_4ba1_bfe5_7c4f0207cbc1.slice/crio-46ada5a7e5ad6339391fbf5170bc34775efa8b078b765d542b9ec7d61f4d6d30 WatchSource:0}: Error finding container 46ada5a7e5ad6339391fbf5170bc34775efa8b078b765d542b9ec7d61f4d6d30: Status 404 returned error can't find the container with id 46ada5a7e5ad6339391fbf5170bc34775efa8b078b765d542b9ec7d61f4d6d30 Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.929439 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-64tvm"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.934751 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-46lxq"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.938901 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.947685 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.949989 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tpd8x"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.963634 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:45 crc kubenswrapper[4805]: E1203 00:08:45.964646 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.464632186 +0000 UTC m=+150.313594792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.973713 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6"] Dec 03 00:08:45 crc kubenswrapper[4805]: I1203 00:08:45.980463 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc"] Dec 03 00:08:46 crc kubenswrapper[4805]: W1203 00:08:46.005222 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc134545f_c51b_4a9b_8604_78d2a46de64d.slice/crio-a21553d1a95ce627e3e7b32ef295ee330305513ea9089b73c0c9443b0af3c4e8 WatchSource:0}: Error finding container a21553d1a95ce627e3e7b32ef295ee330305513ea9089b73c0c9443b0af3c4e8: Status 404 returned error can't find the container with id a21553d1a95ce627e3e7b32ef295ee330305513ea9089b73c0c9443b0af3c4e8 Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.024986 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s77db"] Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.029938 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rm6z"] Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.035512 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nlnzd"] Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.036605 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ncp6l"] Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.065212 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.065651 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.565626646 +0000 UTC m=+150.414589252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.167152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.167644 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.667623922 +0000 UTC m=+150.516586528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.268661 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.268815 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.768779956 +0000 UTC m=+150.617742562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.269028 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.269440 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.769426292 +0000 UTC m=+150.618388898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.326744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" event={"ID":"d0be7e17-5f33-4b0b-a045-eb4e71c6f1b3","Type":"ContainerStarted","Data":"a7a79a05921fa621d92916d4688789341348be177631c48b63e923afff272f34"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.327707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fb2qm" event={"ID":"9594ceca-a9f4-497a-876c-845411320228","Type":"ContainerStarted","Data":"f82b65c984882ea3fd627ea08e39532794728044298318f4054c7cd79a4c6c47"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.329188 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" event={"ID":"29e6ec47-1ff7-4c33-814a-3e25e3c40a88","Type":"ContainerStarted","Data":"bb2ac543012ada0ba9b5e013433b353de1bb8f7bcff367f9c51f66d0e6dcfd70"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.330231 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" event={"ID":"c134545f-c51b-4a9b-8604-78d2a46de64d","Type":"ContainerStarted","Data":"a21553d1a95ce627e3e7b32ef295ee330305513ea9089b73c0c9443b0af3c4e8"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.331544 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" event={"ID":"8279cb46-b6c7-4f7e-a572-f52bfecfaada","Type":"ContainerStarted","Data":"2ad636c11530b135275471402e95e1bd7638bb75b00bfd15a462f34ea85d21b7"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.332661 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" event={"ID":"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1","Type":"ContainerStarted","Data":"46ada5a7e5ad6339391fbf5170bc34775efa8b078b765d542b9ec7d61f4d6d30"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.334098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e28ba0d116199e442faefbb5c61b0f07a0bc34992f1a16328578ddd03fd3f30e"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.335458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7988v" event={"ID":"84613154-8790-4a63-9359-bdfbdaaaee0c","Type":"ContainerStarted","Data":"a58172f1e0d75d43cdb155d5438377086c8352daf90f532b12ced7303ee97183"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.336772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" event={"ID":"ae6f9f11-3892-49dc-848c-e40db6d4629b","Type":"ContainerStarted","Data":"babac506637f2fc359abd581ec0376b43d1ed665621a569bfb92ed3e8aece420"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.337724 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" event={"ID":"adbdf2c1-efd2-4036-b9be-e32b0ca196db","Type":"ContainerStarted","Data":"5325be16a7c4287403fe931ca9ab66039347d03ddb98080b4e45948783f55ee3"} Dec 03 00:08:46 crc kubenswrapper[4805]: 
I1203 00:08:46.339238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" event={"ID":"daf7d226-4f5d-4112-b02a-9eaae61a6d74","Type":"ContainerStarted","Data":"c8bdc6baf9d3011946777d45265a06c99e9482c06802a98837a4a2683ce4588f"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.340743 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6cc1af2dc378826d581eb8b6c2c2418a78521ea4484a870b369fb40588d9d6ca"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.348929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-pvwqn" event={"ID":"54f1e878-22b1-43ab-9225-7212ec9633e7","Type":"ContainerStarted","Data":"f8d443565a73af39e266434f0c6565b2b648675acbec22ecb7b447fa691617f9"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.348984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-pvwqn" event={"ID":"54f1e878-22b1-43ab-9225-7212ec9633e7","Type":"ContainerStarted","Data":"6d6266b28de12cc4ff50f57a0db1b7859d5e6522425d25ef639ffaf01e0d4e9f"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.351613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" event={"ID":"698014dc-c4df-4eae-b761-3f5192f6492a","Type":"ContainerStarted","Data":"e69de2d1a666a2871828b49ff4eff7063850d504e04eac0da00d7c225eb83e18"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.353556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" event={"ID":"b2409836-005c-4e36-ae98-14a3053117d1","Type":"ContainerStarted","Data":"14fcf4790435d3772bcd8b5cb99a74a73efa961f68610bd2a9b583b8490ce1c7"} Dec 03 00:08:46 crc kubenswrapper[4805]: 
I1203 00:08:46.355404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" event={"ID":"88be45c2-86f5-4e59-8a8d-903a7b898560","Type":"ContainerStarted","Data":"3e65f88c4ae1e538972b14eec4f2ad968da27de4ba6d2f65904f521ec825b046"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.360810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" event={"ID":"e608454a-5352-4a77-80cb-5294bd1ae980","Type":"ContainerStarted","Data":"dc54116b4db6281d69dea5ba7b36a54717c23880eaccb7998d1aab8c7c02d12a"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.362593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" event={"ID":"5b43c926-4b2b-4560-874a-25662916e05e","Type":"ContainerStarted","Data":"1cfeb0701563541b7a61232b265669a8f31d579c7a3906fef1bc2baffe81f757"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.364543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d2d51bcd69f68ae5a75e29f62aa78e7f5072be67e0e68763789c80bd255a6b86"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.364687 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.366507 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" event={"ID":"ea247096-f6e0-490e-8fdd-3d6b6ce7a787","Type":"ContainerStarted","Data":"d8a4827fd15ca32907e9a0b422f14dee422d1fa444b568bfcc1b4f4aff8be17b"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.366784 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.368309 4805 generic.go:334] "Generic (PLEG): container finished" podID="efd736f9-2ca3-40e9-b51a-25f95ff4529c" containerID="72c9f3129f96e22614a4edb2ba35e2c9f9b3ee1b097703bf380feb13532934fa" exitCode=0 Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.368359 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" event={"ID":"efd736f9-2ca3-40e9-b51a-25f95ff4529c","Type":"ContainerDied","Data":"72c9f3129f96e22614a4edb2ba35e2c9f9b3ee1b097703bf380feb13532934fa"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.370046 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-k8744 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.370305 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" podUID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.370925 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.371101 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.871068519 +0000 UTC m=+150.720031135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.370991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" event={"ID":"eb1a5f57-d662-40f5-96a5-bf9ca852e368","Type":"ContainerStarted","Data":"083bffa09ef433b9afb89f32dcef6a4da4ccf426fc0a45bb688139de0cfaa431"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.371443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.371846 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.871832818 +0000 UTC m=+150.720795424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.372741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" event={"ID":"2e9e2297-dd89-42c0-a954-65ef398b4618","Type":"ContainerStarted","Data":"fd6f8c82db95d9472ab635601539e5813aee968e3a21c55d36252ca1671d5246"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.373169 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29412000-pvwqn" podStartSLOduration=131.373134961 podStartE2EDuration="2m11.373134961s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:46.370390362 +0000 UTC m=+150.219352998" watchObservedRunningTime="2025-12-03 00:08:46.373134961 +0000 UTC m=+150.222097567" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.373527 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hsnws" podStartSLOduration=130.373518361 podStartE2EDuration="2m10.373518361s" podCreationTimestamp="2025-12-03 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:46.357643139 +0000 UTC m=+150.206605745" watchObservedRunningTime="2025-12-03 00:08:46.373518361 +0000 UTC m=+150.222480987" Dec 03 00:08:46 crc 
kubenswrapper[4805]: I1203 00:08:46.374405 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" event={"ID":"976c4d52-a36d-43f0-ae70-921f30051080","Type":"ContainerStarted","Data":"fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.374677 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.375726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" event={"ID":"b51558fc-0e2d-4299-a606-cbbe8992836c","Type":"ContainerStarted","Data":"79e27337c7eddab63d9a438a0394899ce84f4d684a991e06fbaed5651a962ec1"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.376879 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" event={"ID":"2cd88982-bc5c-4811-9794-b7342f16d887","Type":"ContainerStarted","Data":"68568180e13c6e771e760db3982d0ed4d9030f77fa9f6868836408073147d1f4"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.378479 4805 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bfzn8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.378549 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" podUID="976c4d52-a36d-43f0-ae70-921f30051080" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.379025 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" event={"ID":"5bd54177-64af-4b6a-952a-b1fce803a911","Type":"ContainerStarted","Data":"251fafdb3e351a9132658ede59b010e8722c80c7a2bfba0d00c5b5551beb9db4"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.382470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" event={"ID":"742c5936-25e8-4b3f-82ec-e9a0125b855c","Type":"ContainerStarted","Data":"20a6e971053dc0319b43d4e529f68bffced86d4ca792ccfaea0db33d8c786025"} Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.399231 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" podStartSLOduration=131.399188198 podStartE2EDuration="2m11.399188198s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:46.397983228 +0000 UTC m=+150.246945854" watchObservedRunningTime="2025-12-03 00:08:46.399188198 +0000 UTC m=+150.248150814" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.412971 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mq2pr" podStartSLOduration=131.412945776 podStartE2EDuration="2m11.412945776s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:46.411337306 +0000 UTC m=+150.260299902" watchObservedRunningTime="2025-12-03 00:08:46.412945776 +0000 UTC m=+150.261908382" Dec 03 00:08:46 crc 
kubenswrapper[4805]: I1203 00:08:46.466333 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" podStartSLOduration=130.466312883 podStartE2EDuration="2m10.466312883s" podCreationTimestamp="2025-12-03 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:46.463521823 +0000 UTC m=+150.312484439" watchObservedRunningTime="2025-12-03 00:08:46.466312883 +0000 UTC m=+150.315275489" Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.474448 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.475523 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.975486046 +0000 UTC m=+150.824448652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.478843 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.483710 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:46.983671711 +0000 UTC m=+150.832634317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.504521 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:46 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:46 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:46 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.504573 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:46 crc kubenswrapper[4805]: W1203 00:08:46.532373 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22c00906_1dcc_4400_8cae_dc050595ee91.slice/crio-7686d5ff8d732c5a342c81c94fcaef4794bf9e62a2dbee53b391d16591594838 WatchSource:0}: Error finding container 7686d5ff8d732c5a342c81c94fcaef4794bf9e62a2dbee53b391d16591594838: Status 404 returned error can't find the container with id 7686d5ff8d732c5a342c81c94fcaef4794bf9e62a2dbee53b391d16591594838 Dec 03 00:08:46 crc kubenswrapper[4805]: W1203 00:08:46.536851 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a7d00a4_3b99_44ea_b608_376ac0866cf2.slice/crio-172135d628e54d6d3219f0e25baca0d11d31c3f5d99be0a8804cd09bb9e2dcd4 WatchSource:0}: Error finding container 172135d628e54d6d3219f0e25baca0d11d31c3f5d99be0a8804cd09bb9e2dcd4: Status 404 returned error can't find the container with id 172135d628e54d6d3219f0e25baca0d11d31c3f5d99be0a8804cd09bb9e2dcd4 Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.580552 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.580767 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.080729043 +0000 UTC m=+150.929691649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.580824 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.581340 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.081333198 +0000 UTC m=+150.930295804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.682364 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.682558 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.182530674 +0000 UTC m=+151.031493280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.682765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.683139 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.183125408 +0000 UTC m=+151.032088014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.711259 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6ltm8" podStartSLOduration=131.711238608 podStartE2EDuration="2m11.711238608s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:46.47845064 +0000 UTC m=+150.327413246" watchObservedRunningTime="2025-12-03 00:08:46.711238608 +0000 UTC m=+150.560201224" Dec 03 00:08:46 crc kubenswrapper[4805]: W1203 00:08:46.714100 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb160ac4_0c55_4a33_86b0_c8026712e657.slice/crio-db98e8317f620597a80e3a4a7d91880f826c1df245d8884adae6c4075160df99 WatchSource:0}: Error finding container db98e8317f620597a80e3a4a7d91880f826c1df245d8884adae6c4075160df99: Status 404 returned error can't find the container with id db98e8317f620597a80e3a4a7d91880f826c1df245d8884adae6c4075160df99 Dec 03 00:08:46 crc kubenswrapper[4805]: W1203 00:08:46.719172 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa7b63c1_20b5_4963_b55a_48b6d75052a5.slice/crio-c4a6f494102174ade356a1c68126500fef198bf4455700d85d2f93d5ede138a2 WatchSource:0}: Error finding container 
c4a6f494102174ade356a1c68126500fef198bf4455700d85d2f93d5ede138a2: Status 404 returned error can't find the container with id c4a6f494102174ade356a1c68126500fef198bf4455700d85d2f93d5ede138a2 Dec 03 00:08:46 crc kubenswrapper[4805]: W1203 00:08:46.725701 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cfa86d_9594_400f_991c_4819838ee49d.slice/crio-bdff571b666ef32e8326f4d877c02e76ca52e704d83b72315fc2469c96860d12 WatchSource:0}: Error finding container bdff571b666ef32e8326f4d877c02e76ca52e704d83b72315fc2469c96860d12: Status 404 returned error can't find the container with id bdff571b666ef32e8326f4d877c02e76ca52e704d83b72315fc2469c96860d12 Dec 03 00:08:46 crc kubenswrapper[4805]: W1203 00:08:46.757588 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02d45a95_1e04_446b_a85f_af0dc2d6d453.slice/crio-4cd48daf5f9144920a30c26e219ecfeaef7b322fb9e06a575aa137f02556bc70 WatchSource:0}: Error finding container 4cd48daf5f9144920a30c26e219ecfeaef7b322fb9e06a575aa137f02556bc70: Status 404 returned error can't find the container with id 4cd48daf5f9144920a30c26e219ecfeaef7b322fb9e06a575aa137f02556bc70 Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.810066 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.810617 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:47.310591427 +0000 UTC m=+151.159554033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:46 crc kubenswrapper[4805]: I1203 00:08:46.912175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:46 crc kubenswrapper[4805]: E1203 00:08:46.912640 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.412621803 +0000 UTC m=+151.261584409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.014435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.014923 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.514899156 +0000 UTC m=+151.363861762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.116213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.116699 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.616679336 +0000 UTC m=+151.465641942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.223171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.224067 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.724039876 +0000 UTC m=+151.573002492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.330350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.330802 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.830781121 +0000 UTC m=+151.679743727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.432179 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.432615 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:47.932590172 +0000 UTC m=+151.781552778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.486211 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" event={"ID":"698014dc-c4df-4eae-b761-3f5192f6492a","Type":"ContainerStarted","Data":"97fa549a76c6791437b1ddb8dfdae19402cf0bbff99dab5d270603347d43fced"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.495684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" event={"ID":"9a7d00a4-3b99-44ea-b608-376ac0866cf2","Type":"ContainerStarted","Data":"172135d628e54d6d3219f0e25baca0d11d31c3f5d99be0a8804cd09bb9e2dcd4"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.498302 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" event={"ID":"2cd88982-bc5c-4811-9794-b7342f16d887","Type":"ContainerStarted","Data":"e4bfc200bdad5396906d44fbe2a971fbfcb94dc0fab6384a4007e48ca652419c"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.501127 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nlnzd" event={"ID":"02d45a95-1e04-446b-a85f-af0dc2d6d453","Type":"ContainerStarted","Data":"4cd48daf5f9144920a30c26e219ecfeaef7b322fb9e06a575aa137f02556bc70"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.505923 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:47 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:47 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:47 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.506032 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.587972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" event={"ID":"9fd2f5bb-569b-42a2-9e7e-b6309d58eec0","Type":"ContainerStarted","Data":"9131d96395f0559b0b5a6d705d176d2e81a96bc63aae3ae7b508e2a7902d3de5"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.589053 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.589760 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.08974066 +0000 UTC m=+151.938703266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.596804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" event={"ID":"fd1a9d56-21a3-450e-b9af-fc132ee10466","Type":"ContainerStarted","Data":"603ed9fa26684ec3f79384a671e62394d70d7c2141d65dbb1b0e58279ca48313"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.598548 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" event={"ID":"4f26c935-53de-4459-862f-4d5fbbe97e88","Type":"ContainerStarted","Data":"32bca15616fd8620ca3cdfcee2db1e58abca0ce825a9a46e04fc12cb898abe3e"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.599632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" event={"ID":"adbdf2c1-efd2-4036-b9be-e32b0ca196db","Type":"ContainerStarted","Data":"1a060c8c23a33f939eddf0463a466851b500602bbcddce77e4d6b568ab17ebe3"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.600868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.618533 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-995bc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 
10.217.0.26:5443: connect: connection refused" start-of-body= Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.618595 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" podUID="adbdf2c1-efd2-4036-b9be-e32b0ca196db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.619785 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fb2qm" event={"ID":"9594ceca-a9f4-497a-876c-845411320228","Type":"ContainerStarted","Data":"71e2e078bbdcb42bd917d308258a27419021d04f60ff43af0cf00f5ab530e52e"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.628664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" event={"ID":"7c71a167-0d46-4202-b2b0-8e7eb6a0d932","Type":"ContainerStarted","Data":"8c5afebd1f3c22b5893c076a4a1f61cf359fce2625a92d5c3ee58719360cce0a"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.638411 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7sxvt" podStartSLOduration=132.638391669 podStartE2EDuration="2m12.638391669s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:47.635725581 +0000 UTC m=+151.484688187" watchObservedRunningTime="2025-12-03 00:08:47.638391669 +0000 UTC m=+151.487354275" Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.649574 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" 
event={"ID":"22c00906-1dcc-4400-8cae-dc050595ee91","Type":"ContainerStarted","Data":"7686d5ff8d732c5a342c81c94fcaef4794bf9e62a2dbee53b391d16591594838"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.711372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" event={"ID":"fb160ac4-0c55-4a33-86b0-c8026712e657","Type":"ContainerStarted","Data":"db98e8317f620597a80e3a4a7d91880f826c1df245d8884adae6c4075160df99"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.711384 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.711464 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.211439363 +0000 UTC m=+152.060401969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.712263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.713880 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.213872555 +0000 UTC m=+152.062835161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.723310 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" event={"ID":"67d3c3b7-f552-4372-ac94-0baf8aaadd78","Type":"ContainerStarted","Data":"93b53350ef0fc9582b1d1a21bc8e581361062a39a31d53adeef8b64eb6978eda"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.749798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" event={"ID":"aa7b63c1-20b5-4963-b55a-48b6d75052a5","Type":"ContainerStarted","Data":"c4a6f494102174ade356a1c68126500fef198bf4455700d85d2f93d5ede138a2"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.794404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tpd8x" event={"ID":"ae42d7f8-a970-4db2-bd74-1d0bf073d607","Type":"ContainerStarted","Data":"f8d32a9eb120d15a5c1d32edb5d5b2b87a2e72606adb24448b1e798479518ca8"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.799243 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" event={"ID":"61cfa86d-9594-400f-991c-4819838ee49d","Type":"ContainerStarted","Data":"bdff571b666ef32e8326f4d877c02e76ca52e704d83b72315fc2469c96860d12"} Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.978063 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-499xn container/catalog-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.978137 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" podUID="8279cb46-b6c7-4f7e-a572-f52bfecfaada" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.978749 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:47 crc kubenswrapper[4805]: I1203 00:08:47.980088 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" podStartSLOduration=132.980027215 podStartE2EDuration="2m12.980027215s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:47.978648991 +0000 UTC m=+151.827611597" watchObservedRunningTime="2025-12-03 00:08:47.980027215 +0000 UTC m=+151.828989821" Dec 03 00:08:47 crc kubenswrapper[4805]: E1203 00:08:47.980914 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.480883988 +0000 UTC m=+152.329846594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.028661 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.028798 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.031709 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-k8744 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.031809 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" podUID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.031913 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:08:48 crc 
kubenswrapper[4805]: I1203 00:08:48.031940 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.045099 4805 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bfzn8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.045253 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" podUID="976c4d52-a36d-43f0-ae70-921f30051080" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.054480 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-kn8zr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.054604 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" podUID="2e9e2297-dd89-42c0-a954-65ef398b4618" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.161024 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.258892 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.758821945 +0000 UTC m=+152.607784541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.263962 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.264919 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.764891859 +0000 UTC m=+152.613854465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.378286 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.378727 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.878712703 +0000 UTC m=+152.727675309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.426922 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fb2qm" podStartSLOduration=133.426899029 podStartE2EDuration="2m13.426899029s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.369240683 +0000 UTC m=+152.218203309" watchObservedRunningTime="2025-12-03 00:08:48.426899029 +0000 UTC m=+152.275861635" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.427658 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pq2qs" podStartSLOduration=133.427652368 podStartE2EDuration="2m13.427652368s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.424830817 +0000 UTC m=+152.273793423" watchObservedRunningTime="2025-12-03 00:08:48.427652368 +0000 UTC m=+152.276614974" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.483441 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.483744 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:48.983716654 +0000 UTC m=+152.832679260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.508512 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:48 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:48 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:48 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.508608 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.522181 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-7988v" podStartSLOduration=8.522154224 podStartE2EDuration="8.522154224s" 
podCreationTimestamp="2025-12-03 00:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.519323733 +0000 UTC m=+152.368286349" watchObservedRunningTime="2025-12-03 00:08:48.522154224 +0000 UTC m=+152.371116830" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.555704 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" podStartSLOduration=133.55567266 podStartE2EDuration="2m13.55567266s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.553738892 +0000 UTC m=+152.402701508" watchObservedRunningTime="2025-12-03 00:08:48.55567266 +0000 UTC m=+152.404635276" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.587051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.587410 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:49.087396812 +0000 UTC m=+152.936359408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.634552 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-slg54" podStartSLOduration=133.634528612 podStartE2EDuration="2m13.634528612s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.592011098 +0000 UTC m=+152.440973704" watchObservedRunningTime="2025-12-03 00:08:48.634528612 +0000 UTC m=+152.483491218" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.635116 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" podStartSLOduration=133.635110587 podStartE2EDuration="2m13.635110587s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.63287553 +0000 UTC m=+152.481838146" watchObservedRunningTime="2025-12-03 00:08:48.635110587 +0000 UTC m=+152.484073193" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.679356 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2lqj" podStartSLOduration=133.679334343 podStartE2EDuration="2m13.679334343s" podCreationTimestamp="2025-12-03 00:06:35 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.677406934 +0000 UTC m=+152.526369550" watchObservedRunningTime="2025-12-03 00:08:48.679334343 +0000 UTC m=+152.528296949" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.687744 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.688149 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:49.188123165 +0000 UTC m=+153.037085771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.696804 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pt5zn" podStartSLOduration=133.696788084 podStartE2EDuration="2m13.696788084s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.694655501 +0000 UTC m=+152.543618117" watchObservedRunningTime="2025-12-03 00:08:48.696788084 +0000 UTC m=+152.545750690" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.789976 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.790496 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:49.29047518 +0000 UTC m=+153.139437846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.803949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" event={"ID":"b2409836-005c-4e36-ae98-14a3053117d1","Type":"ContainerStarted","Data":"669d26e3ce09df2b6ceb9a863a9be3eacd8d3be738dac5182eb395cf5823fcc2"} Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.807228 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" event={"ID":"b51558fc-0e2d-4299-a606-cbbe8992836c","Type":"ContainerStarted","Data":"694fa80f7a2dfd5eee9fd9905343768ce01fb3f08ef919c8e58442d51260f11e"} Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.807839 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-kn8zr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.807884 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" podUID="2e9e2297-dd89-42c0-a954-65ef398b4618" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.808145 4805 patch_prober.go:28] 
interesting pod/catalog-operator-68c6474976-499xn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.808254 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" podUID="8279cb46-b6c7-4f7e-a572-f52bfecfaada" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.816016 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-995bc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.816118 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" podUID="adbdf2c1-efd2-4036-b9be-e32b0ca196db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.827367 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwpvv" podStartSLOduration=133.82733766 podStartE2EDuration="2m13.82733766s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.818502367 +0000 UTC m=+152.667464983" watchObservedRunningTime="2025-12-03 00:08:48.82733766 
+0000 UTC m=+152.676300266" Dec 03 00:08:48 crc kubenswrapper[4805]: I1203 00:08:48.901031 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:48 crc kubenswrapper[4805]: E1203 00:08:48.901460 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:49.401435861 +0000 UTC m=+153.250398467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.009276 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.009965 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:49.509944031 +0000 UTC m=+153.358906837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.112213 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.112819 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:49.612794918 +0000 UTC m=+153.461757524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.237605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.238012 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:49.73799861 +0000 UTC m=+153.586961216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.339277 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.339532 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:49.839505512 +0000 UTC m=+153.688468118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.502497 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:49 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:49 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:49 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.502541 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.503071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.503439 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:50.003427932 +0000 UTC m=+153.852390538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.606672 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.606947 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.106926045 +0000 UTC m=+153.955888651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.709569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.710472 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.210453289 +0000 UTC m=+154.059415895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.810398 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.810734 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.310709251 +0000 UTC m=+154.159671857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.917142 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:49 crc kubenswrapper[4805]: E1203 00:08:49.917726 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.417708032 +0000 UTC m=+154.266670638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.939583 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" event={"ID":"61cfa86d-9594-400f-991c-4819838ee49d","Type":"ContainerStarted","Data":"978de6011fd828ff799cf647094509777fe4309c49daa9819685c7dbac9b2e74"} Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.939958 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.952848 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9rm6z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.952926 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.969244 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" 
event={"ID":"88be45c2-86f5-4e59-8a8d-903a7b898560","Type":"ContainerStarted","Data":"d61a8c57b615e14357bd1944bfc135d6bb56e13ebecd2d77ab445b9ae95230ae"} Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.971305 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" event={"ID":"c134545f-c51b-4a9b-8604-78d2a46de64d","Type":"ContainerStarted","Data":"d903a2754c4a1f3b6d16d4abb233b406d77e696c15f3e096700ba15ba9f8459d"} Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.972370 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" event={"ID":"ae6f9f11-3892-49dc-848c-e40db6d4629b","Type":"ContainerStarted","Data":"4abf0975dd2da0545f3126caff3d7ed31064a0e0820647cd10a603646899f00e"} Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.975573 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-27wk7" podStartSLOduration=134.975550512 podStartE2EDuration="2m14.975550512s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:48.838518032 +0000 UTC m=+152.687480648" watchObservedRunningTime="2025-12-03 00:08:49.975550512 +0000 UTC m=+153.824513118" Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.976481 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" podStartSLOduration=134.976473226 podStartE2EDuration="2m14.976473226s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:49.975391778 +0000 UTC m=+153.824354384" 
watchObservedRunningTime="2025-12-03 00:08:49.976473226 +0000 UTC m=+153.825435832" Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.992856 4805 generic.go:334] "Generic (PLEG): container finished" podID="742c5936-25e8-4b3f-82ec-e9a0125b855c" containerID="cc7794a4883f2d36b07055bfadfe81f17b329efeeda1a7641c80d4069bbc53af" exitCode=0 Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.992921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" event={"ID":"742c5936-25e8-4b3f-82ec-e9a0125b855c","Type":"ContainerDied","Data":"cc7794a4883f2d36b07055bfadfe81f17b329efeeda1a7641c80d4069bbc53af"} Dec 03 00:08:49 crc kubenswrapper[4805]: I1203 00:08:49.998446 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9lwf7" podStartSLOduration=133.998426451 podStartE2EDuration="2m13.998426451s" podCreationTimestamp="2025-12-03 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:49.997602829 +0000 UTC m=+153.846565445" watchObservedRunningTime="2025-12-03 00:08:49.998426451 +0000 UTC m=+153.847389057" Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.002590 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" event={"ID":"61783bcf-b91f-4ba1-bfe5-7c4f0207cbc1","Type":"ContainerStarted","Data":"4dfd7859ea370579e43da4f1746a8fac9fa40ea450e86fd892ab20f10771a0b6"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.022501 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.023430 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.523409502 +0000 UTC m=+154.372372108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.045455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" event={"ID":"698014dc-c4df-4eae-b761-3f5192f6492a","Type":"ContainerStarted","Data":"e4c3a664248a86f8ef21b1ae942b70d50abb9f0104bd4047fc87838cf27a3056"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.072403 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" event={"ID":"eb1a5f57-d662-40f5-96a5-bf9ca852e368","Type":"ContainerStarted","Data":"6ebe772cab31a5d572b1b03eefda8af9bf404006d20438dbf28eb028282473f6"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.081972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" event={"ID":"67d3c3b7-f552-4372-ac94-0baf8aaadd78","Type":"ContainerStarted","Data":"689fc0fb269fbe6e6680fc7c93cbcac04f9de6bb82ff1c61a564c1a149a89dc0"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.111392 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" event={"ID":"aa7b63c1-20b5-4963-b55a-48b6d75052a5","Type":"ContainerStarted","Data":"5667fe6b8cf0753ffaa82277756e4f7b808a7bd5225bf6bc531268c794383475"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.120039 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" event={"ID":"efd736f9-2ca3-40e9-b51a-25f95ff4529c","Type":"ContainerStarted","Data":"501e92555380472d22e18a64d04384a4e3507454b2465e6860176e46efecd511"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.122424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" event={"ID":"fd1a9d56-21a3-450e-b9af-fc132ee10466","Type":"ContainerStarted","Data":"268ac17880cd826d20dfae46c476da988e604b8fd722445aebac63eb4a9655f8"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.123554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.123951 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.623937069 +0000 UTC m=+154.472899675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.125586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tpd8x" event={"ID":"ae42d7f8-a970-4db2-bd74-1d0bf073d607","Type":"ContainerStarted","Data":"9982ba6a9b3a65d3dca96bd9d84072e624f0ed0361fea0a49b46e3550c92a68d"} Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.242338 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.243410 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.743383446 +0000 UTC m=+154.592346052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.271905 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dx5d9" podStartSLOduration=135.271878645 podStartE2EDuration="2m15.271878645s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:50.16674903 +0000 UTC m=+154.015711636" watchObservedRunningTime="2025-12-03 00:08:50.271878645 +0000 UTC m=+154.120841251" Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.272985 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-642ng" podStartSLOduration=135.272976453 podStartE2EDuration="2m15.272976453s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:50.272350236 +0000 UTC m=+154.121312872" watchObservedRunningTime="2025-12-03 00:08:50.272976453 +0000 UTC m=+154.121939069" Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.327621 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tpd8x" podStartSLOduration=10.327602772 podStartE2EDuration="10.327602772s" podCreationTimestamp="2025-12-03 00:08:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:50.327020037 +0000 UTC m=+154.175982653" watchObservedRunningTime="2025-12-03 00:08:50.327602772 +0000 UTC m=+154.176565378"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.328502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" event={"ID":"22c00906-1dcc-4400-8cae-dc050595ee91","Type":"ContainerStarted","Data":"a606a402c5cb40f0b243a18a13fefc9e7c335d120c4fee21f65e18ce697e3f0e"}
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.333942 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" event={"ID":"fb160ac4-0c55-4a33-86b0-c8026712e657","Type":"ContainerStarted","Data":"49a38b25ee10b3b97dcc49e13bbd9d3293a1e6420ee58e64f78f8c168690c947"}
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.347117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" event={"ID":"4f26c935-53de-4459-862f-4d5fbbe97e88","Type":"ContainerStarted","Data":"e820f80486afbdc68e867dda25b6a5549d8933e223d88d9a5622f148c3a37b6f"}
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.347798 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.348700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.354022 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.854003089 +0000 UTC m=+154.702965695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.406477 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nlnzd" event={"ID":"02d45a95-1e04-446b-a85f-af0dc2d6d453","Type":"ContainerStarted","Data":"c542fa9b8cb183f2a5f6afe73af7d233f178d21633dcde221a01c84982832dcb"}
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.409290 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" podStartSLOduration=135.409271014 podStartE2EDuration="2m15.409271014s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:50.405805017 +0000 UTC m=+154.254767633" watchObservedRunningTime="2025-12-03 00:08:50.409271014 +0000 UTC m=+154.258233620"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.417056 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" event={"ID":"9a7d00a4-3b99-44ea-b608-376ac0866cf2","Type":"ContainerStarted","Data":"c9b4d329d1830f7fca8a66df5ee62499d97adb5885ced3f38cbf5fe0dff58c58"}
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.444489 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" podStartSLOduration=135.444460433 podStartE2EDuration="2m15.444460433s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:50.436616555 +0000 UTC m=+154.285579181" watchObservedRunningTime="2025-12-03 00:08:50.444460433 +0000 UTC m=+154.293423039"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.453814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.453987 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.953949962 +0000 UTC m=+154.802912568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.454516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.455340 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:50.955330437 +0000 UTC m=+154.804293043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.472446 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pb48s container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.472507 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" podUID="4f26c935-53de-4459-862f-4d5fbbe97e88" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.505571 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:50 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:08:50 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:08:50 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.505652 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.556100 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.556700 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.056676186 +0000 UTC m=+154.905638792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.582237 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.582905 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.584968 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.585263 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.594350 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.663451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.664066 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.164051958 +0000 UTC m=+155.013014564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.764287 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.764713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.764797 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.764942 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.264916974 +0000 UTC m=+155.113879580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.868342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.868420 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.868445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.868763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.869159 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.369137726 +0000 UTC m=+155.218100332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.886698 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.924735 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:50 crc kubenswrapper[4805]: I1203 00:08:50.974189 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:50 crc kubenswrapper[4805]: E1203 00:08:50.974545 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.474518487 +0000 UTC m=+155.323481093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.107734 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.108607 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.608588032 +0000 UTC m=+155.457550648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.224639 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.224770 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.724744106 +0000 UTC m=+155.573706712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.225021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.225496 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.725477164 +0000 UTC m=+155.574439770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.340796 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.341097 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.841046442 +0000 UTC m=+155.690009088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.443680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.444270 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:51.944250497 +0000 UTC m=+155.793213123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.461919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" event={"ID":"aa7b63c1-20b5-4963-b55a-48b6d75052a5","Type":"ContainerStarted","Data":"c662d941fc248f37e6e23b703548de4bf5523f51c32107d2cba16b697db03549"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.465070 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" event={"ID":"fb160ac4-0c55-4a33-86b0-c8026712e657","Type":"ContainerStarted","Data":"2522c41fe00e0e661503bd05619477ffac466cd7d30d355bd20143c3d0747272"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.479227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" event={"ID":"efd736f9-2ca3-40e9-b51a-25f95ff4529c","Type":"ContainerStarted","Data":"a1a6c721dbaa953dff3df66a7e9ca8b45d9b196e184aa9ce3a6d0c7bb7dce44c"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.482295 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" event={"ID":"eb1a5f57-d662-40f5-96a5-bf9ca852e368","Type":"ContainerStarted","Data":"f431d6e5e0b8db163ffb0555b95881406d10130e90bac3c82f625074db883046"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.490818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" event={"ID":"67d3c3b7-f552-4372-ac94-0baf8aaadd78","Type":"ContainerStarted","Data":"bd1d0451c926ca9d9d7cd82b44f82956cd62f709c42f36003904b9a681b58813"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.498802 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:51 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:08:51 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:08:51 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.498871 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.504462 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" event={"ID":"c134545f-c51b-4a9b-8604-78d2a46de64d","Type":"ContainerStarted","Data":"0ca2f1f5463784e837c79e1d4834563358c70f6471cf92276da8310e476aeaf7"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.507705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" event={"ID":"88be45c2-86f5-4e59-8a8d-903a7b898560","Type":"ContainerStarted","Data":"cf2198c511d00f7671c43e853386f23e9ea56c3071923e1de3394582b0ebb5b9"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.508445 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.510263 4805 generic.go:334] "Generic (PLEG): container finished" podID="9a7d00a4-3b99-44ea-b608-376ac0866cf2" containerID="c9b4d329d1830f7fca8a66df5ee62499d97adb5885ced3f38cbf5fe0dff58c58" exitCode=0
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.510340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" event={"ID":"9a7d00a4-3b99-44ea-b608-376ac0866cf2","Type":"ContainerDied","Data":"c9b4d329d1830f7fca8a66df5ee62499d97adb5885ced3f38cbf5fe0dff58c58"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.515098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" event={"ID":"22c00906-1dcc-4400-8cae-dc050595ee91","Type":"ContainerStarted","Data":"13f7609cf77479ccc8bf5c045fe1043a910790b68fc6304758abff4abf2366d7"}
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.518789 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9rm6z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.518834 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.554814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.555307 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.055278392 +0000 UTC m=+155.904240998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.555441 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pb48s container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.555486 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" podUID="4f26c935-53de-4459-862f-4d5fbbe97e88" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.658366 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.661789 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.161773761 +0000 UTC m=+156.010736377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.669563 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s77db" podStartSLOduration=136.669543466 podStartE2EDuration="2m16.669543466s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:51.664089788 +0000 UTC m=+155.513052404" watchObservedRunningTime="2025-12-03 00:08:51.669543466 +0000 UTC m=+155.518506072"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.793667 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.794137 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.294111222 +0000 UTC m=+156.143073828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.835141 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wzx9r" podStartSLOduration=136.835113658 podStartE2EDuration="2m16.835113658s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:51.834392929 +0000 UTC m=+155.683355545" watchObservedRunningTime="2025-12-03 00:08:51.835113658 +0000 UTC m=+155.684076274"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.836258 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" podStartSLOduration=136.836247066 podStartE2EDuration="2m16.836247066s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:51.730532957 +0000 UTC m=+155.579495593" watchObservedRunningTime="2025-12-03 00:08:51.836247066 +0000 UTC m=+155.685209682"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.879255 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-64tvm" podStartSLOduration=136.879231172 podStartE2EDuration="2m16.879231172s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:51.875914208 +0000 UTC m=+155.724876834" watchObservedRunningTime="2025-12-03 00:08:51.879231172 +0000 UTC m=+155.728193788"
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.895167 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.895633 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.395616726 +0000 UTC m=+156.244579332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.996028 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.996287 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.496217326 +0000 UTC m=+156.345179932 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:51 crc kubenswrapper[4805]: I1203 00:08:51.996428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:51 crc kubenswrapper[4805]: E1203 00:08:51.996868 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.496849431 +0000 UTC m=+156.345812027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.097407 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.097849 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.597820861 +0000 UTC m=+156.446783467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.199426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.200042 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.700025481 +0000 UTC m=+156.548988087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.219457 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jjf6" podStartSLOduration=137.219436272 podStartE2EDuration="2m17.219436272s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:52.13461772 +0000 UTC m=+155.983580326" watchObservedRunningTime="2025-12-03 00:08:52.219436272 +0000 UTC m=+156.068398878" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.236680 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xk9qc" podStartSLOduration=137.236651596 podStartE2EDuration="2m17.236651596s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:52.216828506 +0000 UTC m=+156.065791122" watchObservedRunningTime="2025-12-03 00:08:52.236651596 +0000 UTC m=+156.085614202" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.300811 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.301181 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.801152885 +0000 UTC m=+156.650115491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.406338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.406803 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:52.906786193 +0000 UTC m=+156.755748799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.459238 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" podStartSLOduration=137.459214636 podStartE2EDuration="2m17.459214636s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:52.455740379 +0000 UTC m=+156.304702995" watchObservedRunningTime="2025-12-03 00:08:52.459214636 +0000 UTC m=+156.308177262" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.460178 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-d7x48" podStartSLOduration=137.460171371 podStartE2EDuration="2m17.460171371s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:52.395971609 +0000 UTC m=+156.244934215" watchObservedRunningTime="2025-12-03 00:08:52.460171371 +0000 UTC m=+156.309133977" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.461269 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 
00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.461315 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.461679 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.461699 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.502819 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtztk"] Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.504074 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.504871 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.505970 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:52 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:52 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:52 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.506022 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.507801 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.508144 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.008104831 +0000 UTC m=+156.857067437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.510087 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.523048 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtztk"] Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.566601 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.576041 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nlnzd" event={"ID":"02d45a95-1e04-446b-a85f-af0dc2d6d453","Type":"ContainerStarted","Data":"a99acb612d35c92ac895391c9ae9c0f7b1a5b8eb49c4b4b6906ac35967e31475"} Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.600740 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nlnzd" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.609164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-catalog-content\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.609267 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49r9\" (UniqueName: \"kubernetes.io/projected/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-kube-api-access-h49r9\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.609297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.609317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-utilities\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.615154 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.615630 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-499xn" Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.617118 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:53.117092223 +0000 UTC m=+156.966054819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.647933 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-htqft"] Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.649174 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.651030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" event={"ID":"742c5936-25e8-4b3f-82ec-e9a0125b855c","Type":"ContainerStarted","Data":"7cfa9642aae8890a5e006da8b655e8fb1a3b409484115d82624c70c123409086"} Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.667458 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.713965 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" event={"ID":"9a7d00a4-3b99-44ea-b608-376ac0866cf2","Type":"ContainerStarted","Data":"e5f238f1d2d1d3531434ef1df5f5947ec7b53b57f8f1c2e27192ec7ca3c4297d"} Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.714843 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715154 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49r9\" (UniqueName: \"kubernetes.io/projected/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-kube-api-access-h49r9\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-utilities\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715427 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfc64\" (UniqueName: \"kubernetes.io/projected/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-kube-api-access-xfc64\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-utilities\") pod \"community-operators-htqft\" (UID: 
\"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715543 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-catalog-content\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715646 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-catalog-content\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.715896 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-utilities\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.716012 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.21598711 +0000 UTC m=+157.064949716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.718652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-catalog-content\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.727042 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-kn8zr" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.727846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" event={"ID":"7c71a167-0d46-4202-b2b0-8e7eb6a0d932","Type":"ContainerStarted","Data":"3359e608a7b83a66852ed6b2ac03834f674546571ab4a78ae4dd7cb1affa727e"} Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.729293 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pb48s container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.729342 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" podUID="4f26c935-53de-4459-862f-4d5fbbe97e88" containerName="olm-operator" 
probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.762964 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htqft"] Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.813087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49r9\" (UniqueName: \"kubernetes.io/projected/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-kube-api-access-h49r9\") pod \"certified-operators-vtztk\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.817825 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.817921 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfc64\" (UniqueName: \"kubernetes.io/projected/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-kube-api-access-xfc64\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.818034 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-utilities\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 
00:08:52.818074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-catalog-content\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.819340 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-catalog-content\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.820252 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.320236942 +0000 UTC m=+157.169199548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.821273 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-utilities\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.833029 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nlnzd" podStartSLOduration=12.833008374 podStartE2EDuration="12.833008374s" podCreationTimestamp="2025-12-03 00:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:52.788448349 +0000 UTC m=+156.637410965" watchObservedRunningTime="2025-12-03 00:08:52.833008374 +0000 UTC m=+156.681970980" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.842031 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h9bzd"] Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.843310 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.859773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfc64\" (UniqueName: \"kubernetes.io/projected/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-kube-api-access-xfc64\") pod \"community-operators-htqft\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.862364 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.878260 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9bzd"] Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.904554 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" podStartSLOduration=137.90453467 podStartE2EDuration="2m17.90453467s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:52.903923065 +0000 UTC m=+156.752885671" watchObservedRunningTime="2025-12-03 00:08:52.90453467 +0000 UTC m=+156.753497276" Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.921937 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:52 crc kubenswrapper[4805]: E1203 00:08:52.924974 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.424931036 +0000 UTC m=+157.273893672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:52 crc kubenswrapper[4805]: I1203 00:08:52.993168 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6gm8g"] Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.003421 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.019172 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.028595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-utilities\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.028649 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dlzq\" (UniqueName: \"kubernetes.io/projected/beeb713a-2089-47a0-bda3-e51a217f0f5e-kube-api-access-6dlzq\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.028687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-utilities\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.028735 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.028781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxvr\" (UniqueName: 
\"kubernetes.io/projected/a653b1e4-a669-4c68-abdb-99686a4b39eb-kube-api-access-7dxvr\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.029039 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-catalog-content\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.029122 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-catalog-content\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.029162 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.529145187 +0000 UTC m=+157.378107873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.028183 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6gm8g"] Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.107885 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" podStartSLOduration=138.107852825 podStartE2EDuration="2m18.107852825s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:53.093735297 +0000 UTC m=+156.942697903" watchObservedRunningTime="2025-12-03 00:08:53.107852825 +0000 UTC m=+156.956815431" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.130725 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.130846 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:53.630817934 +0000 UTC m=+157.479780540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.131136 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.131217 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.135902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-utilities\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.136268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dlzq\" (UniqueName: \"kubernetes.io/projected/beeb713a-2089-47a0-bda3-e51a217f0f5e-kube-api-access-6dlzq\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.136311 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-utilities\") pod 
\"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.136350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.136415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxvr\" (UniqueName: \"kubernetes.io/projected/a653b1e4-a669-4c68-abdb-99686a4b39eb-kube-api-access-7dxvr\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.136468 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-catalog-content\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.136498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-catalog-content\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.138145 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-utilities\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.138429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-utilities\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.138640 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.638398205 +0000 UTC m=+157.487360811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.138754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-catalog-content\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.139052 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-catalog-content\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.147286 4805 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d6jmb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.147379 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" podUID="efd736f9-2ca3-40e9-b51a-25f95ff4529c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.202649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxvr\" (UniqueName: \"kubernetes.io/projected/a653b1e4-a669-4c68-abdb-99686a4b39eb-kube-api-access-7dxvr\") pod \"certified-operators-h9bzd\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.215679 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.226504 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dlzq\" (UniqueName: \"kubernetes.io/projected/beeb713a-2089-47a0-bda3-e51a217f0f5e-kube-api-access-6dlzq\") pod \"community-operators-6gm8g\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.299130 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.299392 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.79936708 +0000 UTC m=+157.648329676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.313044 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.469834 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.470498 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:53.970477391 +0000 UTC m=+157.819439997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.484095 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.498638 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.498725 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.512967 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.513018 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.518258 4805 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-wfzwm container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.518360 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm" podUID="742c5936-25e8-4b3f-82ec-e9a0125b855c" containerName="oauth-apiserver" 
probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.521902 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:53 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:53 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:53 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.521993 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.575680 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.577114 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.077078923 +0000 UTC m=+157.926041529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.603391 4805 patch_prober.go:28] interesting pod/console-f9d7485db-fb2qm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.603461 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fb2qm" podUID="9594ceca-a9f4-497a-876c-845411320228" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.710400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.711074 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.211058545 +0000 UTC m=+158.060021141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.831079 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.831939 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.331888707 +0000 UTC m=+158.180851313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.832726 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.833652 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.333635321 +0000 UTC m=+158.182597927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:53 crc kubenswrapper[4805]: I1203 00:08:53.969135 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:53 crc kubenswrapper[4805]: E1203 00:08:53.969613 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.469586734 +0000 UTC m=+158.318549330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.036951 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb48s" Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.178545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.179008 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.678989251 +0000 UTC m=+158.527951857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.211275 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9rm6z container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.211333 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.212674 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9rm6z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.212708 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 03 00:08:54 crc kubenswrapper[4805]: 
I1203 00:08:54.342626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.342751 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.842724136 +0000 UTC m=+158.691686752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.343093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.343481 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:54.843471644 +0000 UTC m=+158.692434250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.449390 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.449799 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:54.949774528 +0000 UTC m=+158.798737134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.509024 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-995bc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": context deadline exceeded" start-of-body= Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.509103 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" podUID="adbdf2c1-efd2-4036-b9be-e32b0ca196db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": context deadline exceeded" Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.509176 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-995bc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.509193 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc" podUID="adbdf2c1-efd2-4036-b9be-e32b0ca196db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 
00:08:54.529334 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:54 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:54 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:54 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.529696 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.601074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.601573 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:55.10155835 +0000 UTC m=+158.950520956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.690180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3149db2-454f-4e09-839a-95d7bd8d9ca6","Type":"ContainerStarted","Data":"45858db17f95b33f0cf5193d3a520b2443bdc456919da1e3f4776342903b41b8"} Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.735638 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.737305 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:55.237280878 +0000 UTC m=+159.086243484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.838025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.838553 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:55.338534845 +0000 UTC m=+159.187497451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:54 crc kubenswrapper[4805]: I1203 00:08:54.942964 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:54 crc kubenswrapper[4805]: E1203 00:08:54.943310 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:55.443283809 +0000 UTC m=+159.292246415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.096275 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.096749 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:55.596733264 +0000 UTC m=+159.445695870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.224958 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.225907 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.288140 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.292158 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.298089 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.300841 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.301030 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e53ce0c7-3553-44b0-8c98-55cc407f7008-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.301124 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53ce0c7-3553-44b0-8c98-55cc407f7008-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.301240 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:55.801219257 +0000 UTC m=+159.650181863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.415733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53ce0c7-3553-44b0-8c98-55cc407f7008-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.415798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.415817 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53ce0c7-3553-44b0-8c98-55cc407f7008-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.415887 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53ce0c7-3553-44b0-8c98-55cc407f7008-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.416511 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:55.916499488 +0000 UTC m=+159.765462094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.516931 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.517409 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.017385975 +0000 UTC m=+159.866348581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.571589 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9hlqz"] Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.572075 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:55 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:55 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:55 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.572115 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.573341 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.620513 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:56.120498139 +0000 UTC m=+159.969460745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.620552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.724906 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.749963 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.249933998 +0000 UTC m=+160.098896604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.759086 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" event={"ID":"7c71a167-0d46-4202-b2b0-8e7eb6a0d932","Type":"ContainerStarted","Data":"e473b49bd3fe83bbad72361957cd73218ea36607925cb014bb9a6cf365b89633"} Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.764681 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.765263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-utilities\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.765368 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.765409 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-catalog-content\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.765461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9tl\" (UniqueName: \"kubernetes.io/projected/788046c6-ddf0-4f22-bc41-260efe363420-kube-api-access-ql9tl\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.765901 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.265886291 +0000 UTC m=+160.114848897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.786249 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3149db2-454f-4e09-839a-95d7bd8d9ca6","Type":"ContainerStarted","Data":"beeffd310b81864bd432179e0f09d6bded2b8908c03665f1df35ca287c66ff5b"} Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.807015 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r647s"] Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.810318 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-46lxq container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.810376 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" podUID="9a7d00a4-3b99-44ea-b608-376ac0866cf2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.812399 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:55 crc kubenswrapper[4805]: W1203 00:08:55.866172 4805 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: secrets "redhat-operators-dockercfg-ct8rh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.866250 4805 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-operators-dockercfg-ct8rh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.866933 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.867948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-catalog-content\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.867992 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql9tl\" (UniqueName: 
\"kubernetes.io/projected/788046c6-ddf0-4f22-bc41-260efe363420-kube-api-access-ql9tl\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.868081 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-utilities\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: E1203 00:08:55.868220 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.368181633 +0000 UTC m=+160.217144239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.868657 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-catalog-content\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.869253 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-utilities\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.917915 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-46lxq container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.917975 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" podUID="9a7d00a4-3b99-44ea-b608-376ac0866cf2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 03 
00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.918058 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-46lxq container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.918072 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq" podUID="9a7d00a4-3b99-44ea-b608-376ac0866cf2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.955002 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hlqz"] Dec 03 00:08:55 crc kubenswrapper[4805]: I1203 00:08:55.957097 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53ce0c7-3553-44b0-8c98-55cc407f7008-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.035243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-utilities\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.035322 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.035413 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scq7s\" (UniqueName: \"kubernetes.io/projected/b4f086f6-ed16-47f7-a630-f480a86f4954-kube-api-access-scq7s\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.035439 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-catalog-content\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: E1203 00:08:56.037297 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.537278033 +0000 UTC m=+160.386240709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.136905 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.137151 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scq7s\" (UniqueName: \"kubernetes.io/projected/b4f086f6-ed16-47f7-a630-f480a86f4954-kube-api-access-scq7s\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.137221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-catalog-content\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.137268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-utilities\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " 
pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.137830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-utilities\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: E1203 00:08:56.137912 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.637889074 +0000 UTC m=+160.486851680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.139619 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-catalog-content\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.150054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9tl\" (UniqueName: \"kubernetes.io/projected/788046c6-ddf0-4f22-bc41-260efe363420-kube-api-access-ql9tl\") pod \"redhat-marketplace-9hlqz\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " 
pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.151144 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.189408 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r647s"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.198544 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.264826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:56 crc kubenswrapper[4805]: E1203 00:08:56.265178 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.765161588 +0000 UTC m=+160.614124194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.269184 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zd9dv"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.270671 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.270744 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-26vk7"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.274573 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.317664 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zd9dv"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.318335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scq7s\" (UniqueName: \"kubernetes.io/projected/b4f086f6-ed16-47f7-a630-f480a86f4954-kube-api-access-scq7s\") pod \"redhat-operators-r647s\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.355436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26vk7"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.366098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.366583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-catalog-content\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.366646 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-utilities\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " 
pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.366674 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-catalog-content\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.366744 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsjz9\" (UniqueName: \"kubernetes.io/projected/cb20f5d6-0456-4fb8-8435-407ccfc9319f-kube-api-access-bsjz9\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.366818 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-utilities\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.366900 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c48zp\" (UniqueName: \"kubernetes.io/projected/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-kube-api-access-c48zp\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: E1203 00:08:56.367049 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 00:08:56.867010969 +0000 UTC m=+160.715973595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.399393 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htqft"] Dec 03 00:08:56 crc kubenswrapper[4805]: W1203 00:08:56.439558 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6a69f1e_7226_4f13_8dce_fcfc6c25f240.slice/crio-ced5342dab4576833d36eef04dc90c41f7a8224b9e40afc3b40344ae4277527e WatchSource:0}: Error finding container ced5342dab4576833d36eef04dc90c41f7a8224b9e40afc3b40344ae4277527e: Status 404 returned error can't find the container with id ced5342dab4576833d36eef04dc90c41f7a8224b9e40afc3b40344ae4277527e Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.476128 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-catalog-content\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.476284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-utilities\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " 
pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.476386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-catalog-content\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.476510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsjz9\" (UniqueName: \"kubernetes.io/projected/cb20f5d6-0456-4fb8-8435-407ccfc9319f-kube-api-access-bsjz9\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.476619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.476708 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-utilities\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.476856 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c48zp\" (UniqueName: \"kubernetes.io/projected/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-kube-api-access-c48zp\") pod \"redhat-marketplace-26vk7\" (UID: 
\"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.477862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-catalog-content\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.478283 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-utilities\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.478778 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-catalog-content\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.480361 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-utilities\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: E1203 00:08:56.491504 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:56.991475682 +0000 UTC m=+160.840438288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.503446 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:56 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:56 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:56 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.503529 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.530265 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=6.530243031 podStartE2EDuration="6.530243031s" podCreationTimestamp="2025-12-03 00:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:56.494830667 +0000 UTC m=+160.343793283" watchObservedRunningTime="2025-12-03 00:08:56.530243031 +0000 UTC m=+160.379205637" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.573454 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-vtztk"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.580638 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:56 crc kubenswrapper[4805]: E1203 00:08:56.581037 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 00:08:57.081010942 +0000 UTC m=+160.929973548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:56 crc kubenswrapper[4805]: W1203 00:08:56.600429 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6f0e6f_57cf_4586_8fe7_0df5145d4f33.slice/crio-b2b8ae9ddb1a02220617f036b446434d8c4cce54531c9af731136623fdd36e6c WatchSource:0}: Error finding container b2b8ae9ddb1a02220617f036b446434d8c4cce54531c9af731136623fdd36e6c: Status 404 returned error can't find the container with id b2b8ae9ddb1a02220617f036b446434d8c4cce54531c9af731136623fdd36e6c Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.625617 4805 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" 
path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.678158 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c48zp\" (UniqueName: \"kubernetes.io/projected/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-kube-api-access-c48zp\") pod \"redhat-marketplace-26vk7\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.680540 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsjz9\" (UniqueName: \"kubernetes.io/projected/cb20f5d6-0456-4fb8-8435-407ccfc9319f-kube-api-access-bsjz9\") pod \"redhat-operators-zd9dv\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.688740 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:56 crc kubenswrapper[4805]: E1203 00:08:56.689048 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 00:08:57.18903465 +0000 UTC m=+161.037997256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kdg2s" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.715939 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.721307 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.727869 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.748027 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.751972 4805 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T00:08:56.6256453Z","Handler":null,"Name":""} Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.755923 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6gm8g"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.792820 4805 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.792868 4805 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.832313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.870908 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9bzd"] Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.874920 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.925868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" event={"ID":"7c71a167-0d46-4202-b2b0-8e7eb6a0d932","Type":"ContainerStarted","Data":"517465fabca25e4c3ed308b21876830072386246dde22ac59583092c623497aa"} Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.938751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:56 crc kubenswrapper[4805]: I1203 00:08:56.958191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gm8g" event={"ID":"beeb713a-2089-47a0-bda3-e51a217f0f5e","Type":"ContainerStarted","Data":"8a255ad769db4a049926227c46f8a7849571e286a7d3e385e0bce44064490928"} Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.023775 4805 generic.go:334] "Generic (PLEG): container finished" podID="a3149db2-454f-4e09-839a-95d7bd8d9ca6" containerID="beeffd310b81864bd432179e0f09d6bded2b8908c03665f1df35ca287c66ff5b" exitCode=0 Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.024241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3149db2-454f-4e09-839a-95d7bd8d9ca6","Type":"ContainerDied","Data":"beeffd310b81864bd432179e0f09d6bded2b8908c03665f1df35ca287c66ff5b"} Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.033235 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.033286 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.052029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerStarted","Data":"b2b8ae9ddb1a02220617f036b446434d8c4cce54531c9af731136623fdd36e6c"} Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.117800 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htqft" event={"ID":"c6a69f1e-7226-4f13-8dce-fcfc6c25f240","Type":"ContainerStarted","Data":"ced5342dab4576833d36eef04dc90c41f7a8224b9e40afc3b40344ae4277527e"} Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.435522 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.537038 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:08:57 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:08:57 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:08:57 crc kubenswrapper[4805]: healthz check failed Dec 03 00:08:57 crc 
kubenswrapper[4805]: I1203 00:08:57.537093 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.664706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kdg2s\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.770092 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx"
Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.841941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3829c74e-7807-4b31-9b2a-2482ec95a235-metrics-certs\") pod \"network-metrics-daemon-q4nqx\" (UID: \"3829c74e-7807-4b31-9b2a-2482ec95a235\") " pod="openshift-multus/network-metrics-daemon-q4nqx"
Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.864322 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hlqz"]
Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.906823 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:08:57 crc kubenswrapper[4805]: I1203 00:08:57.964821 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q4nqx"
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.164087 4805 generic.go:334] "Generic (PLEG): container finished" podID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerID="32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4" exitCode=0
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.164473 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gm8g" event={"ID":"beeb713a-2089-47a0-bda3-e51a217f0f5e","Type":"ContainerDied","Data":"32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4"}
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.180302 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e53ce0c7-3553-44b0-8c98-55cc407f7008","Type":"ContainerStarted","Data":"9672c74106834488887371f3abd8dc008808dfebfb32e7c8c71e350dce845de8"}
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.219037 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerStarted","Data":"f3c92134d9c2f0972f451a2f244f1186c987e245a163eb14b1f7031e6780982d"}
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.271094 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerStarted","Data":"3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb"}
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.271156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerStarted","Data":"df9411d5e4afcbd1afdc16c23769a839135e114d7f22a21529a995c5126aede0"}
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.290511 4805 generic.go:334] "Generic (PLEG): container finished" podID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerID="63ebcbed8204b54fe8f1bf842a6bb0cc0d3dcb23b02c5f544340e5455ff92873" exitCode=0
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.296312 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htqft" event={"ID":"c6a69f1e-7226-4f13-8dce-fcfc6c25f240","Type":"ContainerDied","Data":"63ebcbed8204b54fe8f1bf842a6bb0cc0d3dcb23b02c5f544340e5455ff92873"}
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.485402 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.523949 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:58 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:08:58 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:08:58 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.524276 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.904022 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 03 00:08:58 crc kubenswrapper[4805]: I1203 00:08:58.909813 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm"
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.147689 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfzwm"
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.254431 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-46lxq"
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.264395 4805 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d6jmb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]log ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]etcd ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/max-in-flight-filter ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 03 00:08:59 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-startinformers ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 03 00:08:59 crc kubenswrapper[4805]: livez check failed
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.264490 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb" podUID="efd736f9-2ca3-40e9-b51a-25f95ff4529c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.342908 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hlqz" event={"ID":"788046c6-ddf0-4f22-bc41-260efe363420","Type":"ContainerStarted","Data":"cc26fd9683c3c1146cec388e1370f80073b66505f66784b36f1d836806c8a8b0"}
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.383829 4805 generic.go:334] "Generic (PLEG): container finished" podID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerID="f3c92134d9c2f0972f451a2f244f1186c987e245a163eb14b1f7031e6780982d" exitCode=0
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.384210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerDied","Data":"f3c92134d9c2f0972f451a2f244f1186c987e245a163eb14b1f7031e6780982d"}
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.446711 4805 generic.go:334] "Generic (PLEG): container finished" podID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerID="3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb" exitCode=0
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.447963 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerDied","Data":"3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb"}
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.478088 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r647s"]
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.512057 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:08:59 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:08:59 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:08:59 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.512133 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.525250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" event={"ID":"7c71a167-0d46-4202-b2b0-8e7eb6a0d932","Type":"ContainerStarted","Data":"d81e714b4aeacf04b13c923309e10aa58f8f95f98793b5eeaac5987545c575b8"}
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.574023 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ncp6l" podStartSLOduration=19.573989667 podStartE2EDuration="19.573989667s" podCreationTimestamp="2025-12-03 00:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:08:59.565836321 +0000 UTC m=+163.414798927" watchObservedRunningTime="2025-12-03 00:08:59.573989667 +0000 UTC m=+163.422952273"
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.596052 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zd9dv"]
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.694707 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26vk7"]
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.915071 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:08:59 crc kubenswrapper[4805]: I1203 00:08:59.953996 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kdg2s"]
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.055961 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kube-api-access\") pod \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") "
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.056374 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kubelet-dir\") pod \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\" (UID: \"a3149db2-454f-4e09-839a-95d7bd8d9ca6\") "
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.057236 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3149db2-454f-4e09-839a-95d7bd8d9ca6" (UID: "a3149db2-454f-4e09-839a-95d7bd8d9ca6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.057624 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.068676 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3149db2-454f-4e09-839a-95d7bd8d9ca6" (UID: "a3149db2-454f-4e09-839a-95d7bd8d9ca6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.069986 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q4nqx"]
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.158354 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3149db2-454f-4e09-839a-95d7bd8d9ca6-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 03 00:09:00 crc kubenswrapper[4805]: W1203 00:09:00.236035 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3829c74e_7807_4b31_9b2a_2482ec95a235.slice/crio-01a4426ae59c5d562aac02b60ab235b89a84900262bec3b7ee04b0d9a2a1792e WatchSource:0}: Error finding container 01a4426ae59c5d562aac02b60ab235b89a84900262bec3b7ee04b0d9a2a1792e: Status 404 returned error can't find the container with id 01a4426ae59c5d562aac02b60ab235b89a84900262bec3b7ee04b0d9a2a1792e
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.514491 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:00 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:00 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:00 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.514902 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.636463 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" event={"ID":"3829c74e-7807-4b31-9b2a-2482ec95a235","Type":"ContainerStarted","Data":"01a4426ae59c5d562aac02b60ab235b89a84900262bec3b7ee04b0d9a2a1792e"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.647002 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" event={"ID":"be50df79-cb92-4c40-81ea-a90cee61b549","Type":"ContainerStarted","Data":"a529c89359277f776733285244226619b151a9fc20740447044309630f905eea"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.647054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" event={"ID":"be50df79-cb92-4c40-81ea-a90cee61b549","Type":"ContainerStarted","Data":"9b4829b6438226519b8869926ced3f0afc6362abfb00273c5978a9692f4374e9"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.648139 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s"
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.674279 4805 generic.go:334] "Generic (PLEG): container finished" podID="e53ce0c7-3553-44b0-8c98-55cc407f7008" containerID="3915669d9ce40c0f11dd8f552b98ccbebd319f9f3e465f3e23cbe4ee43cfd1e3" exitCode=0
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.674426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e53ce0c7-3553-44b0-8c98-55cc407f7008","Type":"ContainerDied","Data":"3915669d9ce40c0f11dd8f552b98ccbebd319f9f3e465f3e23cbe4ee43cfd1e3"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.716725 4805 generic.go:334] "Generic (PLEG): container finished" podID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerID="e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988" exitCode=0
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.717155 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26vk7" event={"ID":"809c4c09-4e7d-40b5-9964-7b09c2a19ea5","Type":"ContainerDied","Data":"e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.717522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26vk7" event={"ID":"809c4c09-4e7d-40b5-9964-7b09c2a19ea5","Type":"ContainerStarted","Data":"3efe5e003ab05fe53906d263fcb7e3a6634aa612805f67864820359ac916ff14"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.749087 4805 generic.go:334] "Generic (PLEG): container finished" podID="fd1a9d56-21a3-450e-b9af-fc132ee10466" containerID="268ac17880cd826d20dfae46c476da988e604b8fd722445aebac63eb4a9655f8" exitCode=0
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.749260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" event={"ID":"fd1a9d56-21a3-450e-b9af-fc132ee10466","Type":"ContainerDied","Data":"268ac17880cd826d20dfae46c476da988e604b8fd722445aebac63eb4a9655f8"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.770350 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerID="3438bacc74cd03ca2abb42cd44281714addefef15961f521135ae87c2e4b8084" exitCode=0
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.770469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd9dv" event={"ID":"cb20f5d6-0456-4fb8-8435-407ccfc9319f","Type":"ContainerDied","Data":"3438bacc74cd03ca2abb42cd44281714addefef15961f521135ae87c2e4b8084"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.770499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd9dv" event={"ID":"cb20f5d6-0456-4fb8-8435-407ccfc9319f","Type":"ContainerStarted","Data":"89b91d3813bdd10e05aaf2e79e85e044fa4e047c19cce7a2ad00e13c378c907d"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.779542 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" podStartSLOduration=145.779515636 podStartE2EDuration="2m25.779515636s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:09:00.748238216 +0000 UTC m=+164.597200842" watchObservedRunningTime="2025-12-03 00:09:00.779515636 +0000 UTC m=+164.628478232"
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.824867 4805 generic.go:334] "Generic (PLEG): container finished" podID="788046c6-ddf0-4f22-bc41-260efe363420" containerID="ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf" exitCode=0
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.824965 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hlqz" event={"ID":"788046c6-ddf0-4f22-bc41-260efe363420","Type":"ContainerDied","Data":"ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.832448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3149db2-454f-4e09-839a-95d7bd8d9ca6","Type":"ContainerDied","Data":"45858db17f95b33f0cf5193d3a520b2443bdc456919da1e3f4776342903b41b8"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.832910 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45858db17f95b33f0cf5193d3a520b2443bdc456919da1e3f4776342903b41b8"
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.833032 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.874857 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerID="217b9d9d0a4c028d10e4b0340685fb0495175d1d69790ba1cff4ad3796c27fc5" exitCode=0
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.880321 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r647s" event={"ID":"b4f086f6-ed16-47f7-a630-f480a86f4954","Type":"ContainerDied","Data":"217b9d9d0a4c028d10e4b0340685fb0495175d1d69790ba1cff4ad3796c27fc5"}
Dec 03 00:09:00 crc kubenswrapper[4805]: I1203 00:09:00.880410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r647s" event={"ID":"b4f086f6-ed16-47f7-a630-f480a86f4954","Type":"ContainerStarted","Data":"bb5bfad7070a40d0c0b79aa3c419ed5d7597218312105a8be35284bd80070783"}
Dec 03 00:09:01 crc kubenswrapper[4805]: I1203 00:09:01.497674 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:01 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:01 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:01 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:01 crc kubenswrapper[4805]: I1203 00:09:01.497742 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:01 crc kubenswrapper[4805]: I1203 00:09:01.916752 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" event={"ID":"3829c74e-7807-4b31-9b2a-2482ec95a235","Type":"ContainerStarted","Data":"a34e4f6890b77e898c27dd29f9afcfdac981959a9bef4ce1334d0c45bad66ca5"}
Dec 03 00:09:01 crc kubenswrapper[4805]: I1203 00:09:01.945251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nlnzd"
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.461013 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.461664 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.461072 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.462069 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.506762 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:02 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:02 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:02 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.506839 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.656528 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.711764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53ce0c7-3553-44b0-8c98-55cc407f7008-kubelet-dir\") pod \"e53ce0c7-3553-44b0-8c98-55cc407f7008\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") "
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.711926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53ce0c7-3553-44b0-8c98-55cc407f7008-kube-api-access\") pod \"e53ce0c7-3553-44b0-8c98-55cc407f7008\" (UID: \"e53ce0c7-3553-44b0-8c98-55cc407f7008\") "
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.712474 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53ce0c7-3553-44b0-8c98-55cc407f7008-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e53ce0c7-3553-44b0-8c98-55cc407f7008" (UID: "e53ce0c7-3553-44b0-8c98-55cc407f7008"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.722091 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53ce0c7-3553-44b0-8c98-55cc407f7008-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e53ce0c7-3553-44b0-8c98-55cc407f7008" (UID: "e53ce0c7-3553-44b0-8c98-55cc407f7008"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.741707 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq"
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.812851 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd1a9d56-21a3-450e-b9af-fc132ee10466-secret-volume\") pod \"fd1a9d56-21a3-450e-b9af-fc132ee10466\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") "
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.812918 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume\") pod \"fd1a9d56-21a3-450e-b9af-fc132ee10466\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") "
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.812948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9trm\" (UniqueName: \"kubernetes.io/projected/fd1a9d56-21a3-450e-b9af-fc132ee10466-kube-api-access-n9trm\") pod \"fd1a9d56-21a3-450e-b9af-fc132ee10466\" (UID: \"fd1a9d56-21a3-450e-b9af-fc132ee10466\") "
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.813160 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e53ce0c7-3553-44b0-8c98-55cc407f7008-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.813173 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e53ce0c7-3553-44b0-8c98-55cc407f7008-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.814711 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume" (OuterVolumeSpecName: "config-volume") pod "fd1a9d56-21a3-450e-b9af-fc132ee10466" (UID: "fd1a9d56-21a3-450e-b9af-fc132ee10466"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.825307 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1a9d56-21a3-450e-b9af-fc132ee10466-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fd1a9d56-21a3-450e-b9af-fc132ee10466" (UID: "fd1a9d56-21a3-450e-b9af-fc132ee10466"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.825538 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd1a9d56-21a3-450e-b9af-fc132ee10466-kube-api-access-n9trm" (OuterVolumeSpecName: "kube-api-access-n9trm") pod "fd1a9d56-21a3-450e-b9af-fc132ee10466" (UID: "fd1a9d56-21a3-450e-b9af-fc132ee10466"). InnerVolumeSpecName "kube-api-access-n9trm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.917624 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd1a9d56-21a3-450e-b9af-fc132ee10466-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.917656 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1a9d56-21a3-450e-b9af-fc132ee10466-config-volume\") on node \"crc\" DevicePath \"\""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.917668 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9trm\" (UniqueName: \"kubernetes.io/projected/fd1a9d56-21a3-450e-b9af-fc132ee10466-kube-api-access-n9trm\") on node \"crc\" DevicePath \"\""
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.999862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq" event={"ID":"fd1a9d56-21a3-450e-b9af-fc132ee10466","Type":"ContainerDied","Data":"603ed9fa26684ec3f79384a671e62394d70d7c2141d65dbb1b0e58279ca48313"}
Dec 03 00:09:02 crc kubenswrapper[4805]: I1203 00:09:02.999911 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603ed9fa26684ec3f79384a671e62394d70d7c2141d65dbb1b0e58279ca48313"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:02.999976 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.009328 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q4nqx" event={"ID":"3829c74e-7807-4b31-9b2a-2482ec95a235","Type":"ContainerStarted","Data":"41a479a170282d6317c4a7d2361e8dabc55a8fb55ba7ee84280ff009a635bf52"}
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.021944 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.022821 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e53ce0c7-3553-44b0-8c98-55cc407f7008","Type":"ContainerDied","Data":"9672c74106834488887371f3abd8dc008808dfebfb32e7c8c71e350dce845de8"}
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.022878 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9672c74106834488887371f3abd8dc008808dfebfb32e7c8c71e350dce845de8"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.038194 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q4nqx" podStartSLOduration=148.038139306 podStartE2EDuration="2m28.038139306s" podCreationTimestamp="2025-12-03 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:09:03.03354027 +0000 UTC m=+166.882502896" watchObservedRunningTime="2025-12-03 00:09:03.038139306 +0000 UTC m=+166.887101942"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.138277 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.149172 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d6jmb"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.491127 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-995bc"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.495677 4805 patch_prober.go:28] interesting pod/console-f9d7485db-fb2qm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.495733 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fb2qm" podUID="9594ceca-a9f4-497a-876c-845411320228" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused"
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.499581 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:03 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:03 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:03 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:03 crc kubenswrapper[4805]: I1203 00:09:03.499684 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:04 crc kubenswrapper[4805]: I1203 00:09:04.185412 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z"
Dec 03 00:09:04 crc kubenswrapper[4805]: I1203 00:09:04.497863 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:04 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:04 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:04 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:04 crc kubenswrapper[4805]: I1203 00:09:04.497950 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:05 crc kubenswrapper[4805]: I1203 00:09:05.520839 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:05 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:05 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:05 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:05 crc kubenswrapper[4805]: I1203 00:09:05.520945 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:06 crc kubenswrapper[4805]: I1203 00:09:06.499066 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:06 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:06 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:06 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:06 crc kubenswrapper[4805]: I1203 00:09:06.499215 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:07 crc kubenswrapper[4805]: I1203 00:09:07.499442 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:07 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:07 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:07 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:07 crc kubenswrapper[4805]: I1203 00:09:07.499527 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:08 crc kubenswrapper[4805]: I1203 00:09:08.568888 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 00:09:08 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 03 00:09:08 crc kubenswrapper[4805]: [+]process-running ok
Dec 03 00:09:08 crc kubenswrapper[4805]: healthz check failed
Dec 03 00:09:08 crc kubenswrapper[4805]: I1203 00:09:08.569077 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 00:09:09 crc kubenswrapper[4805]: I1203 00:09:09.547620 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 
00:09:09 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:09 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:09 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:09 crc kubenswrapper[4805]: I1203 00:09:09.547714 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:10 crc kubenswrapper[4805]: I1203 00:09:10.497799 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:10 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:10 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:10 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:10 crc kubenswrapper[4805]: I1203 00:09:10.498262 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:11 crc kubenswrapper[4805]: I1203 00:09:11.503492 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:11 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:11 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:11 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:11 crc kubenswrapper[4805]: I1203 00:09:11.503590 4805 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.461811 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.461895 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.462385 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.462407 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.462477 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.463266 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"3c97aeb4d00baf42cc57d9937169feaf99215de340a119e9b371d2d22313cbb4"} pod="openshift-console/downloads-7954f5f757-hqrgj" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.463389 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" containerID="cri-o://3c97aeb4d00baf42cc57d9937169feaf99215de340a119e9b371d2d22313cbb4" gracePeriod=2 Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.463877 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.463898 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.502328 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:12 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:12 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:12 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:12 crc kubenswrapper[4805]: I1203 00:09:12.502394 4805 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:13 crc kubenswrapper[4805]: I1203 00:09:13.495687 4805 patch_prober.go:28] interesting pod/console-f9d7485db-fb2qm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 03 00:09:13 crc kubenswrapper[4805]: I1203 00:09:13.496431 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fb2qm" podUID="9594ceca-a9f4-497a-876c-845411320228" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 03 00:09:13 crc kubenswrapper[4805]: I1203 00:09:13.498099 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:13 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:13 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:13 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:13 crc kubenswrapper[4805]: I1203 00:09:13.498180 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:14 crc kubenswrapper[4805]: I1203 00:09:14.281393 4805 generic.go:334] "Generic (PLEG): container finished" podID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerID="3c97aeb4d00baf42cc57d9937169feaf99215de340a119e9b371d2d22313cbb4" exitCode=0 Dec 03 00:09:14 crc 
kubenswrapper[4805]: I1203 00:09:14.281442 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hqrgj" event={"ID":"57c96cff-592a-47c8-a038-6bb23bac6aa5","Type":"ContainerDied","Data":"3c97aeb4d00baf42cc57d9937169feaf99215de340a119e9b371d2d22313cbb4"} Dec 03 00:09:14 crc kubenswrapper[4805]: I1203 00:09:14.497757 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:14 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:14 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:14 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:14 crc kubenswrapper[4805]: I1203 00:09:14.497841 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:15 crc kubenswrapper[4805]: I1203 00:09:15.516558 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:15 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:15 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:15 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:15 crc kubenswrapper[4805]: I1203 00:09:15.516623 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:16 crc kubenswrapper[4805]: 
I1203 00:09:16.499332 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:16 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:16 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:16 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:16 crc kubenswrapper[4805]: I1203 00:09:16.499394 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:17 crc kubenswrapper[4805]: I1203 00:09:17.498641 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:17 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:17 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:17 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:17 crc kubenswrapper[4805]: I1203 00:09:17.498986 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:17 crc kubenswrapper[4805]: I1203 00:09:17.850073 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:09:17 crc 
kubenswrapper[4805]: I1203 00:09:17.850260 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:09:17 crc kubenswrapper[4805]: I1203 00:09:17.918011 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:09:18 crc kubenswrapper[4805]: I1203 00:09:18.497554 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:18 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:18 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:18 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:18 crc kubenswrapper[4805]: I1203 00:09:18.497621 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:19 crc kubenswrapper[4805]: I1203 00:09:19.502061 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:19 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:19 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:19 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:19 crc kubenswrapper[4805]: I1203 00:09:19.502485 4805 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:20 crc kubenswrapper[4805]: I1203 00:09:20.497900 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:20 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:20 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:20 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:20 crc kubenswrapper[4805]: I1203 00:09:20.497996 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:21 crc kubenswrapper[4805]: I1203 00:09:21.497810 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:21 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 03 00:09:21 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:21 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:21 crc kubenswrapper[4805]: I1203 00:09:21.497906 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:22 crc kubenswrapper[4805]: I1203 00:09:22.460478 4805 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:22 crc kubenswrapper[4805]: I1203 00:09:22.460562 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:22 crc kubenswrapper[4805]: I1203 00:09:22.497397 4805 patch_prober.go:28] interesting pod/router-default-5444994796-njft5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 00:09:22 crc kubenswrapper[4805]: [+]has-synced ok Dec 03 00:09:22 crc kubenswrapper[4805]: [+]process-running ok Dec 03 00:09:22 crc kubenswrapper[4805]: healthz check failed Dec 03 00:09:22 crc kubenswrapper[4805]: I1203 00:09:22.497998 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-njft5" podUID="dcd4dd18-f71c-47de-be9e-7648df9eed36" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 00:09:22 crc kubenswrapper[4805]: I1203 00:09:22.943763 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 00:09:23 crc kubenswrapper[4805]: I1203 00:09:23.495501 4805 patch_prober.go:28] interesting pod/console-f9d7485db-fb2qm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 03 00:09:23 crc kubenswrapper[4805]: I1203 
00:09:23.495561 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fb2qm" podUID="9594ceca-a9f4-497a-876c-845411320228" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 03 00:09:23 crc kubenswrapper[4805]: I1203 00:09:23.497580 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:09:23 crc kubenswrapper[4805]: I1203 00:09:23.500234 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-njft5" Dec 03 00:09:23 crc kubenswrapper[4805]: I1203 00:09:23.866040 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6nkz" Dec 03 00:09:27 crc kubenswrapper[4805]: I1203 00:09:27.364565 4805 generic.go:334] "Generic (PLEG): container finished" podID="54f1e878-22b1-43ab-9225-7212ec9633e7" containerID="f8d443565a73af39e266434f0c6565b2b648675acbec22ecb7b447fa691617f9" exitCode=0 Dec 03 00:09:27 crc kubenswrapper[4805]: I1203 00:09:27.364644 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-pvwqn" event={"ID":"54f1e878-22b1-43ab-9225-7212ec9633e7","Type":"ContainerDied","Data":"f8d443565a73af39e266434f0c6565b2b648675acbec22ecb7b447fa691617f9"} Dec 03 00:09:32 crc kubenswrapper[4805]: I1203 00:09:32.462434 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:32 crc kubenswrapper[4805]: I1203 00:09:32.462521 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:33 crc kubenswrapper[4805]: I1203 00:09:33.499240 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:09:33 crc kubenswrapper[4805]: I1203 00:09:33.504055 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fb2qm" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.441646 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 00:09:35 crc kubenswrapper[4805]: E1203 00:09:35.442854 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1a9d56-21a3-450e-b9af-fc132ee10466" containerName="collect-profiles" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.443015 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1a9d56-21a3-450e-b9af-fc132ee10466" containerName="collect-profiles" Dec 03 00:09:35 crc kubenswrapper[4805]: E1203 00:09:35.443111 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3149db2-454f-4e09-839a-95d7bd8d9ca6" containerName="pruner" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.443225 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3149db2-454f-4e09-839a-95d7bd8d9ca6" containerName="pruner" Dec 03 00:09:35 crc kubenswrapper[4805]: E1203 00:09:35.443755 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53ce0c7-3553-44b0-8c98-55cc407f7008" containerName="pruner" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.443851 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53ce0c7-3553-44b0-8c98-55cc407f7008" containerName="pruner" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 
00:09:35.444093 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53ce0c7-3553-44b0-8c98-55cc407f7008" containerName="pruner" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.445587 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3149db2-454f-4e09-839a-95d7bd8d9ca6" containerName="pruner" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.445711 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1a9d56-21a3-450e-b9af-fc132ee10466" containerName="collect-profiles" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.446219 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.452792 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.452794 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.454674 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.473759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.473827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.574630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.574768 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.575064 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.596311 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:35 crc kubenswrapper[4805]: I1203 00:09:35.773410 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.477840 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.541466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/54f1e878-22b1-43ab-9225-7212ec9633e7-serviceca\") pod \"54f1e878-22b1-43ab-9225-7212ec9633e7\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.541544 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thm9p\" (UniqueName: \"kubernetes.io/projected/54f1e878-22b1-43ab-9225-7212ec9633e7-kube-api-access-thm9p\") pod \"54f1e878-22b1-43ab-9225-7212ec9633e7\" (UID: \"54f1e878-22b1-43ab-9225-7212ec9633e7\") " Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.542802 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f1e878-22b1-43ab-9225-7212ec9633e7-serviceca" (OuterVolumeSpecName: "serviceca") pod "54f1e878-22b1-43ab-9225-7212ec9633e7" (UID: "54f1e878-22b1-43ab-9225-7212ec9633e7"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.559448 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f1e878-22b1-43ab-9225-7212ec9633e7-kube-api-access-thm9p" (OuterVolumeSpecName: "kube-api-access-thm9p") pod "54f1e878-22b1-43ab-9225-7212ec9633e7" (UID: "54f1e878-22b1-43ab-9225-7212ec9633e7"). InnerVolumeSpecName "kube-api-access-thm9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.643356 4805 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/54f1e878-22b1-43ab-9225-7212ec9633e7-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.643404 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thm9p\" (UniqueName: \"kubernetes.io/projected/54f1e878-22b1-43ab-9225-7212ec9633e7-kube-api-access-thm9p\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.653478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29412000-pvwqn" event={"ID":"54f1e878-22b1-43ab-9225-7212ec9633e7","Type":"ContainerDied","Data":"6d6266b28de12cc4ff50f57a0db1b7859d5e6522425d25ef639ffaf01e0d4e9f"} Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.653525 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d6266b28de12cc4ff50f57a0db1b7859d5e6522425d25ef639ffaf01e0d4e9f" Dec 03 00:09:38 crc kubenswrapper[4805]: I1203 00:09:38.653543 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29412000-pvwqn" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.261025 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 00:09:40 crc kubenswrapper[4805]: E1203 00:09:40.261385 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f1e878-22b1-43ab-9225-7212ec9633e7" containerName="image-pruner" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.261403 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f1e878-22b1-43ab-9225-7212ec9633e7" containerName="image-pruner" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.261575 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f1e878-22b1-43ab-9225-7212ec9633e7" containerName="image-pruner" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.262191 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.272775 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.384669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.384749 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-var-lock\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc 
kubenswrapper[4805]: I1203 00:09:40.384778 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.486582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-var-lock\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.486980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.487135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.486694 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-var-lock\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.487473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.503537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:40 crc kubenswrapper[4805]: I1203 00:09:40.585621 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:09:42 crc kubenswrapper[4805]: I1203 00:09:42.463242 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:42 crc kubenswrapper[4805]: I1203 00:09:42.464398 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:42 crc kubenswrapper[4805]: E1203 00:09:42.489670 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 00:09:42 crc kubenswrapper[4805]: E1203 00:09:42.489882 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfc64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-htqft_openshift-marketplace(c6a69f1e-7226-4f13-8dce-fcfc6c25f240): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:42 crc kubenswrapper[4805]: E1203 00:09:42.491228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-htqft" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" Dec 03 00:09:42 crc kubenswrapper[4805]: E1203 00:09:42.513983 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 00:09:42 crc kubenswrapper[4805]: E1203 00:09:42.514345 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dlzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6gm8g_openshift-marketplace(beeb713a-2089-47a0-bda3-e51a217f0f5e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:42 crc kubenswrapper[4805]: E1203 00:09:42.515632 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6gm8g" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" Dec 03 00:09:44 crc kubenswrapper[4805]: E1203 00:09:44.244268 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6gm8g" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" Dec 03 00:09:44 crc kubenswrapper[4805]: E1203 00:09:44.244622 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-htqft" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" Dec 03 00:09:44 crc kubenswrapper[4805]: E1203 00:09:44.319795 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 00:09:44 crc kubenswrapper[4805]: E1203 00:09:44.319977 4805 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dxvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-h9bzd_openshift-marketplace(a653b1e4-a669-4c68-abdb-99686a4b39eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:44 crc kubenswrapper[4805]: E1203 00:09:44.321245 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-h9bzd" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" Dec 03 00:09:47 crc kubenswrapper[4805]: I1203 00:09:47.811430 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:09:47 crc kubenswrapper[4805]: I1203 00:09:47.811953 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:09:47 crc kubenswrapper[4805]: I1203 00:09:47.812023 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:09:47 crc kubenswrapper[4805]: I1203 00:09:47.812636 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:09:47 crc kubenswrapper[4805]: I1203 00:09:47.812711 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" 
containerID="cri-o://7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37" gracePeriod=600 Dec 03 00:09:49 crc kubenswrapper[4805]: E1203 00:09:49.537456 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-h9bzd" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" Dec 03 00:09:49 crc kubenswrapper[4805]: E1203 00:09:49.600350 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 00:09:49 crc kubenswrapper[4805]: E1203 00:09:49.600600 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bsjz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zd9dv_openshift-marketplace(cb20f5d6-0456-4fb8-8435-407ccfc9319f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:49 crc kubenswrapper[4805]: E1203 00:09:49.603820 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zd9dv" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" Dec 03 00:09:49 crc 
kubenswrapper[4805]: I1203 00:09:49.738767 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37" exitCode=0 Dec 03 00:09:49 crc kubenswrapper[4805]: I1203 00:09:49.739119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37"} Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.520497 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zd9dv" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.611517 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.611974 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ql9tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9hlqz_openshift-marketplace(788046c6-ddf0-4f22-bc41-260efe363420): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.613151 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9hlqz" podUID="788046c6-ddf0-4f22-bc41-260efe363420" Dec 03 00:09:50 crc 
kubenswrapper[4805]: E1203 00:09:50.671970 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.672549 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c48zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-26vk7_openshift-marketplace(809c4c09-4e7d-40b5-9964-7b09c2a19ea5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.673862 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-26vk7" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.708180 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.708367 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h49r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vtztk_openshift-marketplace(2d6f0e6f-57cf-4586-8fe7-0df5145d4f33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.709586 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vtztk" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" Dec 03 00:09:50 crc 
kubenswrapper[4805]: E1203 00:09:50.720812 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.721052 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scq7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-r647s_openshift-marketplace(b4f086f6-ed16-47f7-a630-f480a86f4954): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.722600 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r647s" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" Dec 03 00:09:50 crc kubenswrapper[4805]: I1203 00:09:50.757329 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"b197c067b1f4b6da9cb594f04bc4f3715facaffce52939947e8f8684e3a78115"} Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.760454 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vtztk" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.760504 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r647s" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.760889 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9hlqz" podUID="788046c6-ddf0-4f22-bc41-260efe363420" Dec 03 00:09:50 crc kubenswrapper[4805]: E1203 00:09:50.763151 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-26vk7" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.011482 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.018298 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.764879 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3","Type":"ContainerStarted","Data":"1fdfb9f802f2ab5df8a2f7e27625d9b1817257a6ecec329d84d53475e30255aa"} Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.765597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3","Type":"ContainerStarted","Data":"9f5e7cd0e38dcd47642af5135973142f6770cf9fa79e225b339c2e2a95c578e1"} Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.767404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85618e2d-61ea-4ef3-ab0b-205f236bd9c8","Type":"ContainerStarted","Data":"8d58f6744150c47988548350ce23fba24ba230478a7e2bc45ddf5cd013181c8b"} Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.767430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85618e2d-61ea-4ef3-ab0b-205f236bd9c8","Type":"ContainerStarted","Data":"240e70bde4fd6677ea68ec1edf89d0429367e928be8c2c88bbc09b1aa58c562b"} Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.769603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hqrgj" event={"ID":"57c96cff-592a-47c8-a038-6bb23bac6aa5","Type":"ContainerStarted","Data":"511e19a367d70ad25f84a70f6e0d90d761edf812bc00847f71cd9e01aacf651c"} Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.770082 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.770119 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.770722 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:09:51 crc kubenswrapper[4805]: I1203 00:09:51.798689 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.798671047 podStartE2EDuration="16.798671047s" podCreationTimestamp="2025-12-03 00:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:09:51.784187482 +0000 UTC m=+215.633150108" watchObservedRunningTime="2025-12-03 00:09:51.798671047 +0000 UTC m=+215.647633653" Dec 03 00:09:51 crc 
kubenswrapper[4805]: I1203 00:09:51.818694 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.818673323 podStartE2EDuration="11.818673323s" podCreationTimestamp="2025-12-03 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:09:51.812969269 +0000 UTC m=+215.661931875" watchObservedRunningTime="2025-12-03 00:09:51.818673323 +0000 UTC m=+215.667635929" Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 00:09:52.460503 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 00:09:52.460803 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 00:09:52.460521 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 00:09:52.460898 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 
00:09:52.777226 4805 generic.go:334] "Generic (PLEG): container finished" podID="1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3" containerID="1fdfb9f802f2ab5df8a2f7e27625d9b1817257a6ecec329d84d53475e30255aa" exitCode=0 Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 00:09:52.778574 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3","Type":"ContainerDied","Data":"1fdfb9f802f2ab5df8a2f7e27625d9b1817257a6ecec329d84d53475e30255aa"} Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 00:09:52.778598 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:52 crc kubenswrapper[4805]: I1203 00:09:52.779034 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:53 crc kubenswrapper[4805]: I1203 00:09:53.783543 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:09:53 crc kubenswrapper[4805]: I1203 00:09:53.783961 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.259768 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.443100 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kubelet-dir\") pod \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\" (UID: \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.443255 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3" (UID: "1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.443648 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kube-api-access\") pod \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\" (UID: \"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3\") " Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.444319 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.449447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3" (UID: "1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.545386 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.792110 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3","Type":"ContainerDied","Data":"9f5e7cd0e38dcd47642af5135973142f6770cf9fa79e225b339c2e2a95c578e1"} Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.792167 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f5e7cd0e38dcd47642af5135973142f6770cf9fa79e225b339c2e2a95c578e1" Dec 03 00:09:54 crc kubenswrapper[4805]: I1203 00:09:54.792275 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 00:09:55 crc kubenswrapper[4805]: I1203 00:09:55.925325 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sz46g"] Dec 03 00:09:58 crc kubenswrapper[4805]: I1203 00:09:58.835021 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gm8g" event={"ID":"beeb713a-2089-47a0-bda3-e51a217f0f5e","Type":"ContainerStarted","Data":"6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b"} Dec 03 00:10:00 crc kubenswrapper[4805]: I1203 00:10:00.861517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htqft" event={"ID":"c6a69f1e-7226-4f13-8dce-fcfc6c25f240","Type":"ContainerStarted","Data":"00d83c0786c6ef8c3e6e5efdd61d9ecafc5cac0e410a459352addfa72f8dfa54"} Dec 03 00:10:02 crc kubenswrapper[4805]: I1203 00:10:02.460487 4805 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:10:02 crc kubenswrapper[4805]: I1203 00:10:02.460918 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:10:02 crc kubenswrapper[4805]: I1203 00:10:02.460488 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqrgj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 03 00:10:02 crc kubenswrapper[4805]: I1203 00:10:02.461059 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqrgj" podUID="57c96cff-592a-47c8-a038-6bb23bac6aa5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 03 00:10:05 crc kubenswrapper[4805]: I1203 00:10:05.917417 4805 generic.go:334] "Generic (PLEG): container finished" podID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerID="00d83c0786c6ef8c3e6e5efdd61d9ecafc5cac0e410a459352addfa72f8dfa54" exitCode=0 Dec 03 00:10:05 crc kubenswrapper[4805]: I1203 00:10:05.917489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htqft" event={"ID":"c6a69f1e-7226-4f13-8dce-fcfc6c25f240","Type":"ContainerDied","Data":"00d83c0786c6ef8c3e6e5efdd61d9ecafc5cac0e410a459352addfa72f8dfa54"} Dec 03 00:10:05 crc kubenswrapper[4805]: I1203 00:10:05.922359 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerID="6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b" exitCode=0 Dec 03 00:10:05 crc kubenswrapper[4805]: I1203 00:10:05.922505 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gm8g" event={"ID":"beeb713a-2089-47a0-bda3-e51a217f0f5e","Type":"ContainerDied","Data":"6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b"} Dec 03 00:10:12 crc kubenswrapper[4805]: I1203 00:10:12.481438 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hqrgj" Dec 03 00:10:21 crc kubenswrapper[4805]: I1203 00:10:21.033471 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" podUID="467b4db8-19ee-4476-b72a-158547e24884" containerName="oauth-openshift" containerID="cri-o://eac0ab83ce3b1e5cacfc6355cdaf7c80768008f18475ab0fb432671b30076cd1" gracePeriod=15 Dec 03 00:10:22 crc kubenswrapper[4805]: I1203 00:10:22.028999 4805 generic.go:334] "Generic (PLEG): container finished" podID="467b4db8-19ee-4476-b72a-158547e24884" containerID="eac0ab83ce3b1e5cacfc6355cdaf7c80768008f18475ab0fb432671b30076cd1" exitCode=0 Dec 03 00:10:22 crc kubenswrapper[4805]: I1203 00:10:22.029095 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" event={"ID":"467b4db8-19ee-4476-b72a-158547e24884","Type":"ContainerDied","Data":"eac0ab83ce3b1e5cacfc6355cdaf7c80768008f18475ab0fb432671b30076cd1"} Dec 03 00:10:22 crc kubenswrapper[4805]: I1203 00:10:22.429627 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sz46g container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Dec 03 00:10:22 crc 
kubenswrapper[4805]: I1203 00:10:22.429716 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" podUID="467b4db8-19ee-4476-b72a-158547e24884" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.217068 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.254843 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb"] Dec 03 00:10:24 crc kubenswrapper[4805]: E1203 00:10:24.255225 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3" containerName="pruner" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.255251 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3" containerName="pruner" Dec 03 00:10:24 crc kubenswrapper[4805]: E1203 00:10:24.255277 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467b4db8-19ee-4476-b72a-158547e24884" containerName="oauth-openshift" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.255286 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="467b4db8-19ee-4476-b72a-158547e24884" containerName="oauth-openshift" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.255400 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce836ba-8e5e-4ca2-a8a4-66c2f5f67fd3" containerName="pruner" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.255413 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="467b4db8-19ee-4476-b72a-158547e24884" containerName="oauth-openshift" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.255935 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.286826 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-ocp-branding-template\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.286914 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-cliconfig\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.286959 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-serving-cert\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.286992 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/467b4db8-19ee-4476-b72a-158547e24884-audit-dir\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-idp-0-file-data\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: 
\"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287083 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-login\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287142 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-trusted-ca-bundle\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsl5w\" (UniqueName: \"kubernetes.io/projected/467b4db8-19ee-4476-b72a-158547e24884-kube-api-access-wsl5w\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-router-certs\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287402 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-session\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: 
I1203 00:10:24.287454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-service-ca\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-audit-policies\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287524 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-error\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.287597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-provider-selection\") pod \"467b4db8-19ee-4476-b72a-158547e24884\" (UID: \"467b4db8-19ee-4476-b72a-158547e24884\") " Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288329 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288315 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467b4db8-19ee-4476-b72a-158547e24884-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288588 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288627 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288659 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288729 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288838 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-session\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.288948 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.289057 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl9xb\" (UniqueName: \"kubernetes.io/projected/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-kube-api-access-xl9xb\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.289313 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.289403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-audit-policies\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.290748 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.292496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.292171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-audit-dir\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.293608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.293821 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.293845 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.293867 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.293961 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/467b4db8-19ee-4476-b72a-158547e24884-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.293973 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.296921 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.297857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467b4db8-19ee-4476-b72a-158547e24884-kube-api-access-wsl5w" (OuterVolumeSpecName: "kube-api-access-wsl5w") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "kube-api-access-wsl5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.298905 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.304301 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.309550 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.310312 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.311535 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.311908 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.311619 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "467b4db8-19ee-4476-b72a-158547e24884" (UID: "467b4db8-19ee-4476-b72a-158547e24884"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.314099 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb"] Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.395400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.395563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.395719 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.395784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.395822 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.395891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 
00:10:24.395942 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.395982 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396034 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-session\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl9xb\" (UniqueName: \"kubernetes.io/projected/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-kube-api-access-xl9xb\") pod 
\"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-audit-policies\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396190 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-audit-dir\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396264 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396385 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396411 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-login\") on node \"crc\" 
DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396436 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsl5w\" (UniqueName: \"kubernetes.io/projected/467b4db8-19ee-4476-b72a-158547e24884-kube-api-access-wsl5w\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396459 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-audit-dir\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396481 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396752 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396775 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396944 4805 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.396966 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/467b4db8-19ee-4476-b72a-158547e24884-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.399165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.399212 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.399190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-audit-policies\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.399998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.405430 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.405771 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.406084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-session\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.406042 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " 
pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.407378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.407984 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.408120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.417348 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.418828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl9xb\" (UniqueName: 
\"kubernetes.io/projected/e213e793-0d28-43c9-9e9b-73c82c2f8a8f-kube-api-access-xl9xb\") pod \"oauth-openshift-6d4bd77db6-xrbhb\" (UID: \"e213e793-0d28-43c9-9e9b-73c82c2f8a8f\") " pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:24 crc kubenswrapper[4805]: I1203 00:10:24.589500 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.100554 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" event={"ID":"467b4db8-19ee-4476-b72a-158547e24884","Type":"ContainerDied","Data":"bfc0782d6c6b1b9ecd0939ea9aff7b5ff6e37e2167023700b4a7b822006e141e"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.100622 4805 scope.go:117] "RemoveContainer" containerID="eac0ab83ce3b1e5cacfc6355cdaf7c80768008f18475ab0fb432671b30076cd1" Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.100819 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sz46g" Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.297087 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gm8g" event={"ID":"beeb713a-2089-47a0-bda3-e51a217f0f5e","Type":"ContainerStarted","Data":"96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.300165 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hlqz" event={"ID":"788046c6-ddf0-4f22-bc41-260efe363420","Type":"ContainerStarted","Data":"0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.302793 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26vk7" event={"ID":"809c4c09-4e7d-40b5-9964-7b09c2a19ea5","Type":"ContainerStarted","Data":"3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.306643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd9dv" event={"ID":"cb20f5d6-0456-4fb8-8435-407ccfc9319f","Type":"ContainerStarted","Data":"4b97199cdde73a713d0d305f570d3e3013abbf5dbe99f57ff995359e52eb0b4b"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.309268 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerStarted","Data":"a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.313584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" 
event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerStarted","Data":"50948eedfea00e37ddb7a3e1c54f1c53ada10831b6bd2294e8be0b8f2d2d1dcd"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.316260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r647s" event={"ID":"b4f086f6-ed16-47f7-a630-f480a86f4954","Type":"ContainerStarted","Data":"835498e3877620b59b67c40be8711db6661b59174a027c2daea8c17c4a9dd12c"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.332020 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htqft" event={"ID":"c6a69f1e-7226-4f13-8dce-fcfc6c25f240","Type":"ContainerStarted","Data":"105f2a7bd585ad3b1527246e03750f01c68e5f95b7e355308a5bd70efb844dc8"} Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.435672 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-htqft" podStartSLOduration=7.798764441 podStartE2EDuration="1m33.435650827s" podCreationTimestamp="2025-12-03 00:08:52 +0000 UTC" firstStartedPulling="2025-12-03 00:08:58.485864451 +0000 UTC m=+162.334827057" lastFinishedPulling="2025-12-03 00:10:24.122750837 +0000 UTC m=+247.971713443" observedRunningTime="2025-12-03 00:10:25.430171389 +0000 UTC m=+249.279134005" watchObservedRunningTime="2025-12-03 00:10:25.435650827 +0000 UTC m=+249.284613433" Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.497484 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6gm8g" podStartSLOduration=7.813432451 podStartE2EDuration="1m33.497463378s" podCreationTimestamp="2025-12-03 00:08:52 +0000 UTC" firstStartedPulling="2025-12-03 00:08:58.485677495 +0000 UTC m=+162.334640101" lastFinishedPulling="2025-12-03 00:10:24.169708422 +0000 UTC m=+248.018671028" observedRunningTime="2025-12-03 00:10:25.494386091 +0000 UTC m=+249.343348707" 
watchObservedRunningTime="2025-12-03 00:10:25.497463378 +0000 UTC m=+249.346425984" Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.631873 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sz46g"] Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.634564 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sz46g"] Dec 03 00:10:25 crc kubenswrapper[4805]: I1203 00:10:25.703440 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb"] Dec 03 00:10:26 crc kubenswrapper[4805]: I1203 00:10:26.405150 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" event={"ID":"e213e793-0d28-43c9-9e9b-73c82c2f8a8f","Type":"ContainerStarted","Data":"5ac6f843274d55e158e15fa6de3d370aa5c8d03259dfb791d02fc7d2665d9cf2"} Dec 03 00:10:26 crc kubenswrapper[4805]: I1203 00:10:26.408007 4805 generic.go:334] "Generic (PLEG): container finished" podID="788046c6-ddf0-4f22-bc41-260efe363420" containerID="0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416" exitCode=0 Dec 03 00:10:26 crc kubenswrapper[4805]: I1203 00:10:26.408047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hlqz" event={"ID":"788046c6-ddf0-4f22-bc41-260efe363420","Type":"ContainerDied","Data":"0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416"} Dec 03 00:10:26 crc kubenswrapper[4805]: I1203 00:10:26.603283 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467b4db8-19ee-4476-b72a-158547e24884" path="/var/lib/kubelet/pods/467b4db8-19ee-4476-b72a-158547e24884/volumes" Dec 03 00:10:27 crc kubenswrapper[4805]: I1203 00:10:27.475893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" 
event={"ID":"e213e793-0d28-43c9-9e9b-73c82c2f8a8f","Type":"ContainerStarted","Data":"af564a85cd844c0ccd1701e58a442ac9df9042cccc870d5a737756cb5e1e35f9"} Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.493343 4805 generic.go:334] "Generic (PLEG): container finished" podID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerID="3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678" exitCode=0 Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.493443 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26vk7" event={"ID":"809c4c09-4e7d-40b5-9964-7b09c2a19ea5","Type":"ContainerDied","Data":"3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678"} Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.506000 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerID="835498e3877620b59b67c40be8711db6661b59174a027c2daea8c17c4a9dd12c" exitCode=0 Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.506121 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r647s" event={"ID":"b4f086f6-ed16-47f7-a630-f480a86f4954","Type":"ContainerDied","Data":"835498e3877620b59b67c40be8711db6661b59174a027c2daea8c17c4a9dd12c"} Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.506834 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.513808 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.912853 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.913736 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.913908 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.914034 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b" gracePeriod=15 Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.914076 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67" gracePeriod=15 Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.914092 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96" gracePeriod=15 Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.914220 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa" gracePeriod=15 Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.914291 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50" gracePeriod=15 Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916532 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:10:28 crc kubenswrapper[4805]: E1203 00:10:28.916755 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916775 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 00:10:28 crc kubenswrapper[4805]: E1203 00:10:28.916789 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916796 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:10:28 crc kubenswrapper[4805]: E1203 00:10:28.916809 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916819 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:10:28 crc kubenswrapper[4805]: E1203 00:10:28.916830 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916840 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 00:10:28 crc 
kubenswrapper[4805]: E1203 00:10:28.916855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916864 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 00:10:28 crc kubenswrapper[4805]: E1203 00:10:28.916872 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916880 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 00:10:28 crc kubenswrapper[4805]: E1203 00:10:28.916891 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.916899 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.917044 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.917058 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.917075 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.917086 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.917095 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.917110 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.940417 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d4bd77db6-xrbhb" podStartSLOduration=33.940387003 podStartE2EDuration="33.940387003s" podCreationTimestamp="2025-12-03 00:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:10:28.68756786 +0000 UTC m=+252.536530486" watchObservedRunningTime="2025-12-03 00:10:28.940387003 +0000 UTC m=+252.789349609" Dec 03 00:10:28 crc kubenswrapper[4805]: I1203 00:10:28.944184 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.104791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.105160 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.105275 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.105303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.105335 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.105388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.105419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.105450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207113 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207177 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207241 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207287 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207312 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207351 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207401 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207417 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207478 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 
00:10:29.207504 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.207506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.257363 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.514930 4805 generic.go:334] "Generic (PLEG): container finished" podID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerID="50948eedfea00e37ddb7a3e1c54f1c53ada10831b6bd2294e8be0b8f2d2d1dcd" exitCode=0 Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.514985 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerDied","Data":"50948eedfea00e37ddb7a3e1c54f1c53ada10831b6bd2294e8be0b8f2d2d1dcd"} Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.516036 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.516317 4805 
status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.516607 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.516873 4805 generic.go:334] "Generic (PLEG): container finished" podID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerID="a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68" exitCode=0 Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.516957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerDied","Data":"a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68"} Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.517766 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.517976 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.518214 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.518440 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.519322 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.520878 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.521538 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67" exitCode=0 Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.521573 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96" exitCode=0 Dec 03 00:10:29 crc 
kubenswrapper[4805]: I1203 00:10:29.521588 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50" exitCode=0 Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.521598 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa" exitCode=2 Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.521620 4805 scope.go:117] "RemoveContainer" containerID="f8b63d4490c539bae52cc977ec741580bccd08451b6a2270a2487eafc59e15e2" Dec 03 00:10:29 crc kubenswrapper[4805]: E1203 00:10:29.759965 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-vtztk.187d8c101892ca55 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-vtztk,UID:2d6f0e6f-57cf-4586-8fe7-0df5145d4f33,APIVersion:v1,ResourceVersion:28025,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:10:29.758741077 +0000 UTC m=+253.607703683,LastTimestamp:2025-12-03 00:10:29.758741077 +0000 UTC m=+253.607703683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.844063 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: 
Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 03 00:10:29 crc kubenswrapper[4805]: I1203 00:10:29.844135 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.531360 4805 generic.go:334] "Generic (PLEG): container finished" podID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" containerID="8d58f6744150c47988548350ce23fba24ba230478a7e2bc45ddf5cd013181c8b" exitCode=0 Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.531450 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85618e2d-61ea-4ef3-ab0b-205f236bd9c8","Type":"ContainerDied","Data":"8d58f6744150c47988548350ce23fba24ba230478a7e2bc45ddf5cd013181c8b"} Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.532607 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.533231 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.533929 4805 status_manager.go:851] "Failed to get status 
for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.534548 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerID="4b97199cdde73a713d0d305f570d3e3013abbf5dbe99f57ff995359e52eb0b4b" exitCode=0 Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.534629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd9dv" event={"ID":"cb20f5d6-0456-4fb8-8435-407ccfc9319f","Type":"ContainerDied","Data":"4b97199cdde73a713d0d305f570d3e3013abbf5dbe99f57ff995359e52eb0b4b"} Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.534959 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.535634 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.536048 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.536473 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.536898 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:30 crc kubenswrapper[4805]: I1203 00:10:30.537168 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 00:10:31.135501 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 00:10:31.136401 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 
00:10:31.136894 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 00:10:31.137209 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 00:10:31.137612 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.137687 4805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 00:10:31.138255 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 00:10:31.340090 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.544103 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:10:31 crc kubenswrapper[4805]: E1203 00:10:31.741428 
4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.894633 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.895283 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.895500 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.895834 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.896172 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.896401 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.963336 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-var-lock\") pod \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.963461 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-var-lock" (OuterVolumeSpecName: "var-lock") pod "85618e2d-61ea-4ef3-ab0b-205f236bd9c8" (UID: "85618e2d-61ea-4ef3-ab0b-205f236bd9c8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.963810 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kubelet-dir\") pod \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.963856 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "85618e2d-61ea-4ef3-ab0b-205f236bd9c8" (UID: "85618e2d-61ea-4ef3-ab0b-205f236bd9c8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.963927 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kube-api-access\") pod \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\" (UID: \"85618e2d-61ea-4ef3-ab0b-205f236bd9c8\") " Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.964623 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.964649 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:31 crc kubenswrapper[4805]: I1203 00:10:31.970215 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod 
"85618e2d-61ea-4ef3-ab0b-205f236bd9c8" (UID: "85618e2d-61ea-4ef3-ab0b-205f236bd9c8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.066368 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85618e2d-61ea-4ef3-ab0b-205f236bd9c8-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:32 crc kubenswrapper[4805]: E1203 00:10:32.541968 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.551959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85618e2d-61ea-4ef3-ab0b-205f236bd9c8","Type":"ContainerDied","Data":"240e70bde4fd6677ea68ec1edf89d0429367e928be8c2c88bbc09b1aa58c562b"} Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.552008 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240e70bde4fd6677ea68ec1edf89d0429367e928be8c2c88bbc09b1aa58c562b" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.552104 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.557050 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.557636 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.557909 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.558374 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: I1203 00:10:32.558858 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: E1203 00:10:32.788289 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1609784638},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1204220237},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:3bb6e76bb2fc875de
6aae6909205aad0af8b2a476f3b7e31f64d5ae8e6659572\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:54a0b5857af1053fc62860dff0f0cb8f974ab781ba9fc5722277c34ef2a16b4e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201277260},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e
4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"size
Bytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: E1203 00:10:32.788794 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: E1203 00:10:32.789414 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: E1203 00:10:32.790006 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: E1203 00:10:32.790414 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:32 crc kubenswrapper[4805]: E1203 00:10:32.790436 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.020587 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.020655 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.197466 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.198150 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.198588 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.198878 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.199151 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 
38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.199398 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.199596 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.484882 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.484942 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.525485 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.526100 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.526530 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.526993 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.527255 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.527514 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.527832 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.528092 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" 
pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.563713 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.564863 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b" exitCode=0 Dec 03 00:10:33 crc kubenswrapper[4805]: E1203 00:10:33.584362 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-vtztk.187d8c101892ca55 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-vtztk,UID:2d6f0e6f-57cf-4586-8fe7-0df5145d4f33,APIVersion:v1,ResourceVersion:28025,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:10:29.758741077 +0000 UTC m=+253.607703683,LastTimestamp:2025-12-03 00:10:29.758741077 +0000 UTC m=+253.607703683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.616687 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.617262 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.617577 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.618153 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.618841 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.620110 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.622532 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" 
pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.623144 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.623434 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.623844 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.624104 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.624543 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.625288 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.625655 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.626071 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:33 crc kubenswrapper[4805]: I1203 00:10:33.626495 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:34 crc kubenswrapper[4805]: E1203 00:10:34.143305 4805 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Dec 03 00:10:34 crc kubenswrapper[4805]: I1203 00:10:34.578609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6c864909e45970754cc9f975d67c50a1f77cccf5e2f1a294ba5666dfc7fcca80"} Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.402873 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.404107 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.404798 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.405421 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.405859 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.406129 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.406408 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.406587 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.406751 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.406924 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.521242 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.521354 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.521447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.521487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.521593 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.522026 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.522051 4805 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.521548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.587707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c"} Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.591390 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.592766 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.593417 4805 scope.go:117] "RemoveContainer" containerID="9d8c5a5c358658a64c1a69cbabeacb4c12b2a1c0e8c95be5b0ef472c0bea8e67" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.600458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hlqz" event={"ID":"788046c6-ddf0-4f22-bc41-260efe363420","Type":"ContainerStarted","Data":"7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769"} Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.609299 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.609570 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.609807 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.610026 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.610230 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.610430 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.610674 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.610834 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:35 crc kubenswrapper[4805]: I1203 00:10:35.623290 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath 
\"\"" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.431087 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.431706 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.432191 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.432485 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.432704 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 
03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.433143 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.434478 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.434985 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.437569 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.605562 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.605938 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" 
pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.606288 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.606638 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.607151 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.607692 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.608014 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:36 crc kubenswrapper[4805]: I1203 00:10:36.608265 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.208465 4805 scope.go:117] "RemoveContainer" containerID="a67df3551f84edf383c330c9d9f416cd44625105b3dd9b19ff50a649c22c4e96" Dec 03 00:10:37 crc kubenswrapper[4805]: E1203 00:10:37.344353 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="6.4s" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.613834 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.615690 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.616167 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" 
pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.616687 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.616999 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.617278 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.617578 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.617912 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:37 crc kubenswrapper[4805]: I1203 00:10:37.618212 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.423091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.424062 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.425803 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.426423 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.429221 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.429508 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.429764 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.430179 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.430736 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: 
connect: connection refused" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.440729 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.440776 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:10:40 crc kubenswrapper[4805]: E1203 00:10:40.441352 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:40 crc kubenswrapper[4805]: I1203 00:10:40.442019 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:41 crc kubenswrapper[4805]: E1203 00:10:41.494341 4805 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" volumeName="registry-storage" Dec 03 00:10:42 crc kubenswrapper[4805]: E1203 00:10:42.898467 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:42Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:42Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:42Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T00:10:42Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1609784638},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1204220237},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:3bb6e76bb2fc875de6aae6909205aad0af8b2a476f3b7e31f64d5ae8e6659572\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:54a0b5857af1053fc62860dff0f0cb8f974ab781ba9fc5722277c34ef2a16b4e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201277260},{\\\"names\\\":[\\\"registry.redhat.io/redhat/communi
ty-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e543
19f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e
0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113
\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:42 crc kubenswrapper[4805]: E1203 00:10:42.899424 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:42 crc kubenswrapper[4805]: E1203 00:10:42.899737 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:42 crc kubenswrapper[4805]: E1203 00:10:42.900151 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:42 crc kubenswrapper[4805]: E1203 00:10:42.900729 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:42 crc kubenswrapper[4805]: E1203 00:10:42.900773 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 00:10:43 crc kubenswrapper[4805]: E1203 00:10:43.585944 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-vtztk.187d8c101892ca55 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-vtztk,UID:2d6f0e6f-57cf-4586-8fe7-0df5145d4f33,APIVersion:v1,ResourceVersion:28025,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 00:10:29.758741077 +0000 UTC m=+253.607703683,LastTimestamp:2025-12-03 00:10:29.758741077 +0000 UTC m=+253.607703683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 00:10:43 crc kubenswrapper[4805]: E1203 00:10:43.745892 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="7s" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.680306 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.680673 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a" exitCode=1 Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.680718 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a"} Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.681433 4805 scope.go:117] "RemoveContainer" 
containerID="9c8b0c23ec8e308bbe8e77d62a16ddd273480eef23f26335d831c6775fad989a" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.681936 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.682477 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.682799 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.683388 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.683921 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.684277 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.684597 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.684888 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:44 crc kubenswrapper[4805]: I1203 00:10:44.685239 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.200245 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:10:46 crc 
kubenswrapper[4805]: I1203 00:10:46.200347 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.241531 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.242523 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.243328 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.243797 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.244289 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc 
kubenswrapper[4805]: I1203 00:10:46.244900 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.245546 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.245884 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.246338 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.246793 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 
00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.428812 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.429481 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.430012 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.430719 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.431212 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 
00:10:46.431790 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.432241 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.432578 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.432911 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.433323 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 
00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.766085 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.767027 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.767681 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.768253 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.768654 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.769032 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" 
pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.769548 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.769959 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.770240 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.770729 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:46 crc kubenswrapper[4805]: I1203 00:10:46.771279 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" 
pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:47 crc kubenswrapper[4805]: I1203 00:10:47.454055 4805 scope.go:117] "RemoveContainer" containerID="18d6bbf4f1fad4299dc7ad3d12c04af22399f18deadbec7e700a065481ac4d50" Dec 03 00:10:47 crc kubenswrapper[4805]: I1203 00:10:47.698567 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 00:10:49 crc kubenswrapper[4805]: I1203 00:10:49.509788 4805 scope.go:117] "RemoveContainer" containerID="4e36a6ec42f579320f1c99646f25d36ba7f5c1b78d4b71016c1c99d5d33ce9fa" Dec 03 00:10:49 crc kubenswrapper[4805]: I1203 00:10:49.590427 4805 scope.go:117] "RemoveContainer" containerID="aa8b49d5678e8f722d2aaacb7a4ce4a7c806cc4c0d6c7bd4b1ebf4cdfac51b9b" Dec 03 00:10:49 crc kubenswrapper[4805]: I1203 00:10:49.639385 4805 scope.go:117] "RemoveContainer" containerID="afcc173ad29f9587ae926233843aaddcb7a11bc7e08552eac4d23f2d0330696f" Dec 03 00:10:49 crc kubenswrapper[4805]: I1203 00:10:49.711628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"90221dcf925c0be7f0b60a3ca3519fa357f73f14dde3d77b69fbea51b4c43850"} Dec 03 00:10:49 crc kubenswrapper[4805]: E1203 00:10:49.934572 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-86db912aaae1f30588b2329a98877eb22252ed9487dbc053416b1bbb118777f5.scope\": RecentStats: unable to find data in memory cache]" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.698049 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.721659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r647s" event={"ID":"b4f086f6-ed16-47f7-a630-f480a86f4954","Type":"ContainerStarted","Data":"fc0f9bb92aa6f1bf26a573fc866f7afe42d647b17e963678eef0a2fda4a36d08"} Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.722702 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.722919 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.723225 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.723667 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection 
refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.723943 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.724151 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.724385 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.724385 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26vk7" event={"ID":"809c4c09-4e7d-40b5-9964-7b09c2a19ea5","Type":"ContainerStarted","Data":"f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed"} Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.724675 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc 
kubenswrapper[4805]: I1203 00:10:50.724885 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.725215 4805 status_manager.go:851] "Failed to get status for pod" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" pod="openshift-marketplace/redhat-operators-r647s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r647s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.725753 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.726123 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.726383 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc 
kubenswrapper[4805]: I1203 00:10:50.726643 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.726909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd9dv" event={"ID":"cb20f5d6-0456-4fb8-8435-407ccfc9319f","Type":"ContainerStarted","Data":"8d73bb41f68b93a0e303373297cfcc037a5547f5ea78acdd1c3e9f3f222fa9e1"} Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.726933 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.727298 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.727549 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.727781 4805 
status_manager.go:851] "Failed to get status for pod" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" pod="openshift-marketplace/redhat-operators-r647s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r647s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.728074 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.728484 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.728881 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.729388 4805 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="86db912aaae1f30588b2329a98877eb22252ed9487dbc053416b1bbb118777f5" exitCode=0 Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.729428 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.729536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"86db912aaae1f30588b2329a98877eb22252ed9487dbc053416b1bbb118777f5"} Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.729656 4805 status_manager.go:851] "Failed to get status for pod" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" pod="openshift-marketplace/redhat-marketplace-26vk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26vk7\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.729835 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.729852 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.730017 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: E1203 00:10:50.730190 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.730243 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.730486 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.730965 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.731250 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.731505 4805 status_manager.go:851] "Failed to get status for pod" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" pod="openshift-marketplace/redhat-operators-r647s" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r647s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.731756 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.731983 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.732129 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.732299 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.732463 4805 status_manager.go:851] "Failed to get status for pod" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" pod="openshift-marketplace/redhat-marketplace-26vk7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26vk7\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.732624 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.737404 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.737815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4949db274c04e6b32f2f7e42187cb109521512e305a7c31035336286b4eee902"} Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.738490 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.738836 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc 
kubenswrapper[4805]: I1203 00:10:50.739366 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.739589 4805 status_manager.go:851] "Failed to get status for pod" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" pod="openshift-marketplace/redhat-operators-r647s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r647s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.739800 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.740002 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.740303 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.740678 4805 
status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.740991 4805 status_manager.go:851] "Failed to get status for pod" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" pod="openshift-marketplace/redhat-marketplace-26vk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26vk7\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.741270 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.741535 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.741727 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 
00:10:50.741826 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerStarted","Data":"419cdd6923cedafe19d5a4fe6f5ebfe6bad6fd8ff807aa79ec7ee22f4b866cb8"} Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.742446 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.742635 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.742805 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.743015 4805 status_manager.go:851] "Failed to get status for pod" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" pod="openshift-marketplace/redhat-marketplace-26vk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26vk7\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.743745 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.743982 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.744146 4805 status_manager.go:851] "Failed to get status for pod" podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.744156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerStarted","Data":"84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5"} Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.744315 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.744557 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.744969 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.745214 4805 status_manager.go:851] "Failed to get status for pod" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" pod="openshift-marketplace/redhat-operators-r647s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r647s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.745404 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.745972 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.747524 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="788046c6-ddf0-4f22-bc41-260efe363420" pod="openshift-marketplace/redhat-marketplace-9hlqz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9hlqz\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: E1203 00:10:50.747592 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="7s" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.747755 4805 status_manager.go:851] "Failed to get status for pod" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" pod="openshift-marketplace/certified-operators-vtztk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-vtztk\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.748069 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.748474 4805 status_manager.go:851] "Failed to get status for pod" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" pod="openshift-marketplace/certified-operators-h9bzd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-h9bzd\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.748686 4805 status_manager.go:851] "Failed to get status for pod" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" 
pod="openshift-marketplace/redhat-operators-r647s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r647s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.760802 4805 status_manager.go:851] "Failed to get status for pod" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.761344 4805 status_manager.go:851] "Failed to get status for pod" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" pod="openshift-marketplace/redhat-operators-zd9dv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zd9dv\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.761645 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.761938 4805 status_manager.go:851] "Failed to get status for pod" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" pod="openshift-marketplace/community-operators-htqft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-htqft\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.762218 4805 status_manager.go:851] "Failed to get status for pod" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" 
pod="openshift-marketplace/redhat-marketplace-26vk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-26vk7\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.762577 4805 status_manager.go:851] "Failed to get status for pod" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" pod="openshift-marketplace/community-operators-6gm8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gm8g\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.870153 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.871119 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 00:10:50 crc kubenswrapper[4805]: I1203 00:10:50.871303 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 00:10:51 crc kubenswrapper[4805]: I1203 00:10:51.750764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f0a59a1f447c1aaf0b12c6cd2eab124225201964eca1b5c034cf6c9f902b0bf6"} Dec 03 00:10:52 crc kubenswrapper[4805]: I1203 00:10:52.846352 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:10:52 crc kubenswrapper[4805]: I1203 00:10:52.862868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:10:52 crc kubenswrapper[4805]: I1203 00:10:52.863154 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:10:52 crc kubenswrapper[4805]: I1203 00:10:52.932049 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:10:53 crc kubenswrapper[4805]: I1203 00:10:53.217353 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:10:53 crc kubenswrapper[4805]: I1203 00:10:53.217963 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:10:53 crc kubenswrapper[4805]: I1203 00:10:53.260135 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:10:54 crc kubenswrapper[4805]: I1203 00:10:54.776033 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"967a2894ec70e5fbe349211fac1cd95e919e2b64e4865417786628d5560ea3e2"} Dec 03 00:10:54 crc kubenswrapper[4805]: I1203 00:10:54.776397 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f3fee824623b6b2a09f2b1a02b0c3fec9ac9fef0a5489217d3e86c25f097c51c"} Dec 03 00:10:54 crc kubenswrapper[4805]: I1203 00:10:54.776411 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d1e0ec3c6623a70136b677b4cf649af780fb048785ce0ab00f43eb222dd5b8be"} Dec 03 00:10:54 crc kubenswrapper[4805]: I1203 00:10:54.848010 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:10:55 crc kubenswrapper[4805]: I1203 00:10:55.785372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b47673be25eb32217f1d6b8eaffbad620144704bb741f6bd86d0867724f2974"} Dec 03 00:10:55 crc kubenswrapper[4805]: I1203 00:10:55.786798 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:10:55 crc kubenswrapper[4805]: I1203 00:10:55.786828 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.722555 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.722967 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.729932 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.729983 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.748630 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.748695 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.791170 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:10:56 crc kubenswrapper[4805]: I1203 00:10:56.833914 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:10:57 crc kubenswrapper[4805]: I1203 00:10:57.764189 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zd9dv" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="registry-server" probeResult="failure" output=< Dec 03 00:10:57 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 03 00:10:57 crc kubenswrapper[4805]: > Dec 03 00:10:57 crc kubenswrapper[4805]: I1203 00:10:57.773676 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r647s" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="registry-server" probeResult="failure" output=< Dec 03 00:10:57 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 03 00:10:57 crc kubenswrapper[4805]: > Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.442256 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.442599 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.442622 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.448790 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.796049 4805 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.813105 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.813138 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.817051 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.819956 4805 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b9343c11-f9a0-4dd4-9bde-b7ece0304ad9" Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.869721 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 00:11:00 crc kubenswrapper[4805]: I1203 00:11:00.869809 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 00:11:01 crc kubenswrapper[4805]: I1203 00:11:01.818069 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:11:01 crc kubenswrapper[4805]: I1203 00:11:01.818353 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9403035d-0ad7-413a-a630-a252fcafb16d" Dec 03 00:11:03 crc kubenswrapper[4805]: I1203 00:11:03.262106 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:11:06 crc kubenswrapper[4805]: I1203 00:11:06.461703 4805 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b9343c11-f9a0-4dd4-9bde-b7ece0304ad9" Dec 03 00:11:06 crc kubenswrapper[4805]: I1203 00:11:06.765875 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:11:06 crc kubenswrapper[4805]: I1203 00:11:06.775795 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:11:06 crc kubenswrapper[4805]: I1203 00:11:06.808032 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:11:06 crc kubenswrapper[4805]: I1203 00:11:06.829142 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:11:10 crc kubenswrapper[4805]: I1203 00:11:10.873434 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:11:10 crc kubenswrapper[4805]: 
I1203 00:11:10.878510 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 00:11:13 crc kubenswrapper[4805]: I1203 00:11:13.036860 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 00:11:13 crc kubenswrapper[4805]: I1203 00:11:13.372498 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 00:11:13 crc kubenswrapper[4805]: I1203 00:11:13.489205 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 00:11:13 crc kubenswrapper[4805]: I1203 00:11:13.945629 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 00:11:14 crc kubenswrapper[4805]: I1203 00:11:14.988032 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 00:11:15 crc kubenswrapper[4805]: I1203 00:11:15.067495 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 00:11:15 crc kubenswrapper[4805]: I1203 00:11:15.343145 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 00:11:15 crc kubenswrapper[4805]: I1203 00:11:15.616930 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 00:11:15 crc kubenswrapper[4805]: I1203 00:11:15.699611 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 00:11:15 crc kubenswrapper[4805]: I1203 00:11:15.826518 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 00:11:16 crc 
kubenswrapper[4805]: I1203 00:11:16.146381 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 00:11:16 crc kubenswrapper[4805]: I1203 00:11:16.290587 4805 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 03 00:11:16 crc kubenswrapper[4805]: I1203 00:11:16.479877 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 00:11:16 crc kubenswrapper[4805]: I1203 00:11:16.885456 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 00:11:17 crc kubenswrapper[4805]: I1203 00:11:17.053290 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 00:11:17 crc kubenswrapper[4805]: I1203 00:11:17.386190 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 00:11:17 crc kubenswrapper[4805]: I1203 00:11:17.578845 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:11:17 crc kubenswrapper[4805]: I1203 00:11:17.603287 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 00:11:17 crc kubenswrapper[4805]: I1203 00:11:17.726766 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 00:11:17 crc kubenswrapper[4805]: I1203 00:11:17.752595 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 00:11:17 crc kubenswrapper[4805]: I1203 00:11:17.768552 4805 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 00:11:18 crc kubenswrapper[4805]: I1203 00:11:18.452117 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 00:11:18 crc kubenswrapper[4805]: I1203 00:11:18.847501 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 00:11:19 crc kubenswrapper[4805]: I1203 00:11:19.587367 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 00:11:20 crc kubenswrapper[4805]: I1203 00:11:20.084379 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 00:11:20 crc kubenswrapper[4805]: I1203 00:11:20.267357 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 00:11:20 crc kubenswrapper[4805]: I1203 00:11:20.271446 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 00:11:20 crc kubenswrapper[4805]: I1203 00:11:20.290445 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 00:11:20 crc kubenswrapper[4805]: I1203 00:11:20.445156 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:11:21 crc kubenswrapper[4805]: I1203 00:11:21.562098 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 00:11:22 crc kubenswrapper[4805]: I1203 00:11:22.274636 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 00:11:22 crc kubenswrapper[4805]: I1203 00:11:22.681980 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 00:11:23 crc kubenswrapper[4805]: I1203 00:11:23.107432 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 00:11:23 crc kubenswrapper[4805]: I1203 00:11:23.179338 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 00:11:23 crc kubenswrapper[4805]: I1203 00:11:23.375973 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 00:11:24 crc kubenswrapper[4805]: I1203 00:11:24.116450 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 00:11:24 crc kubenswrapper[4805]: I1203 00:11:24.150524 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 00:11:24 crc kubenswrapper[4805]: I1203 00:11:24.331135 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 00:11:24 crc kubenswrapper[4805]: I1203 00:11:24.964928 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 00:11:26 crc kubenswrapper[4805]: I1203 00:11:26.206296 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 00:11:26 crc kubenswrapper[4805]: I1203 00:11:26.997033 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 00:11:30 crc kubenswrapper[4805]: I1203 00:11:30.284080 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 00:11:30 crc 
kubenswrapper[4805]: I1203 00:11:30.369254 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 00:11:30 crc kubenswrapper[4805]: I1203 00:11:30.977014 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 00:11:31 crc kubenswrapper[4805]: I1203 00:11:31.145923 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 00:11:31 crc kubenswrapper[4805]: I1203 00:11:31.270956 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 00:11:31 crc kubenswrapper[4805]: I1203 00:11:31.604131 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 00:11:31 crc kubenswrapper[4805]: I1203 00:11:31.837767 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 00:11:32 crc kubenswrapper[4805]: I1203 00:11:32.046774 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:11:32 crc kubenswrapper[4805]: I1203 00:11:32.048852 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 00:11:32 crc kubenswrapper[4805]: I1203 00:11:32.568506 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 00:11:32 crc kubenswrapper[4805]: I1203 00:11:32.616016 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 00:11:33 crc kubenswrapper[4805]: I1203 00:11:33.470760 4805 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 00:11:33 crc kubenswrapper[4805]: I1203 00:11:33.922092 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.032420 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.178266 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.219834 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.285627 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.529718 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.530029 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h9bzd" podStartSLOduration=51.504823116 podStartE2EDuration="2m42.530010882s" podCreationTimestamp="2025-12-03 00:08:52 +0000 UTC" firstStartedPulling="2025-12-03 00:08:58.484938487 +0000 UTC m=+162.333901093" lastFinishedPulling="2025-12-03 00:10:49.510126253 +0000 UTC m=+273.359088859" observedRunningTime="2025-12-03 00:11:00.187996751 +0000 UTC m=+284.036959377" watchObservedRunningTime="2025-12-03 00:11:34.530010882 +0000 UTC m=+318.378973488" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.530189 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r647s" 
podStartSLOduration=50.92117837 podStartE2EDuration="2m39.530181587s" podCreationTimestamp="2025-12-03 00:08:55 +0000 UTC" firstStartedPulling="2025-12-03 00:09:00.901316131 +0000 UTC m=+164.750278737" lastFinishedPulling="2025-12-03 00:10:49.510319348 +0000 UTC m=+273.359281954" observedRunningTime="2025-12-03 00:11:00.207323469 +0000 UTC m=+284.056286095" watchObservedRunningTime="2025-12-03 00:11:34.530181587 +0000 UTC m=+318.379144213" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.532900 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtztk" podStartSLOduration=52.476551922 podStartE2EDuration="2m42.532891674s" podCreationTimestamp="2025-12-03 00:08:52 +0000 UTC" firstStartedPulling="2025-12-03 00:08:59.421936727 +0000 UTC m=+163.270899333" lastFinishedPulling="2025-12-03 00:10:49.478276489 +0000 UTC m=+273.327239085" observedRunningTime="2025-12-03 00:11:00.122482026 +0000 UTC m=+283.971444632" watchObservedRunningTime="2025-12-03 00:11:34.532891674 +0000 UTC m=+318.381854280" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.534269 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=66.534262149 podStartE2EDuration="1m6.534262149s" podCreationTimestamp="2025-12-03 00:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:11:00.144375269 +0000 UTC m=+283.993337875" watchObservedRunningTime="2025-12-03 00:11:34.534262149 +0000 UTC m=+318.383224765" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.534732 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-26vk7" podStartSLOduration=53.486903815 podStartE2EDuration="2m39.534725981s" podCreationTimestamp="2025-12-03 00:08:55 +0000 UTC" 
firstStartedPulling="2025-12-03 00:09:00.72821373 +0000 UTC m=+164.577176326" lastFinishedPulling="2025-12-03 00:10:46.776035886 +0000 UTC m=+270.624998492" observedRunningTime="2025-12-03 00:11:00.307062927 +0000 UTC m=+284.156025543" watchObservedRunningTime="2025-12-03 00:11:34.534725981 +0000 UTC m=+318.383688597" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.535147 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zd9dv" podStartSLOduration=50.810597638 podStartE2EDuration="2m39.535143492s" podCreationTimestamp="2025-12-03 00:08:55 +0000 UTC" firstStartedPulling="2025-12-03 00:09:00.785738443 +0000 UTC m=+164.634701049" lastFinishedPulling="2025-12-03 00:10:49.510284297 +0000 UTC m=+273.359246903" observedRunningTime="2025-12-03 00:11:00.271745385 +0000 UTC m=+284.120707991" watchObservedRunningTime="2025-12-03 00:11:34.535143492 +0000 UTC m=+318.384106098" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.535429 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9hlqz" podStartSLOduration=66.776304315 podStartE2EDuration="2m39.535424959s" podCreationTimestamp="2025-12-03 00:08:55 +0000 UTC" firstStartedPulling="2025-12-03 00:09:00.826667886 +0000 UTC m=+164.675630492" lastFinishedPulling="2025-12-03 00:10:33.58578853 +0000 UTC m=+257.434751136" observedRunningTime="2025-12-03 00:11:00.102553233 +0000 UTC m=+283.951515849" watchObservedRunningTime="2025-12-03 00:11:34.535424959 +0000 UTC m=+318.384387565" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.536014 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.536055 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.540097 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.556133 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=34.556114961 podStartE2EDuration="34.556114961s" podCreationTimestamp="2025-12-03 00:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:11:34.555752012 +0000 UTC m=+318.404714618" watchObservedRunningTime="2025-12-03 00:11:34.556114961 +0000 UTC m=+318.405077577" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.798435 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.884335 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 00:11:34 crc kubenswrapper[4805]: I1203 00:11:34.895185 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 00:11:35 crc kubenswrapper[4805]: I1203 00:11:35.138750 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 00:11:35 crc kubenswrapper[4805]: I1203 00:11:35.160543 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 00:11:35 crc kubenswrapper[4805]: I1203 00:11:35.949957 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 00:11:36 crc kubenswrapper[4805]: I1203 00:11:36.165074 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 00:11:36 crc 
kubenswrapper[4805]: I1203 00:11:36.322033 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 00:11:36 crc kubenswrapper[4805]: I1203 00:11:36.419081 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 00:11:36 crc kubenswrapper[4805]: I1203 00:11:36.722920 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 00:11:36 crc kubenswrapper[4805]: I1203 00:11:36.732561 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 00:11:37 crc kubenswrapper[4805]: I1203 00:11:37.590235 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 00:11:37 crc kubenswrapper[4805]: I1203 00:11:37.611752 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 00:11:37 crc kubenswrapper[4805]: I1203 00:11:37.845950 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 00:11:37 crc kubenswrapper[4805]: I1203 00:11:37.880043 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 00:11:37 crc kubenswrapper[4805]: I1203 00:11:37.938110 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 00:11:38 crc kubenswrapper[4805]: I1203 00:11:38.094493 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 00:11:38 crc kubenswrapper[4805]: I1203 00:11:38.403718 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 00:11:38 crc kubenswrapper[4805]: I1203 00:11:38.919482 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.407916 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.408809 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.488389 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.508458 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.599696 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.601972 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.905619 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 00:11:39 crc kubenswrapper[4805]: I1203 00:11:39.941613 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 00:11:40 crc kubenswrapper[4805]: I1203 00:11:40.003622 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 00:11:40 crc kubenswrapper[4805]: I1203 00:11:40.215567 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 00:11:40 crc kubenswrapper[4805]: I1203 00:11:40.240333 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 00:11:40 crc kubenswrapper[4805]: I1203 00:11:40.492085 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 00:11:40 crc kubenswrapper[4805]: I1203 00:11:40.710888 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 00:11:40 crc kubenswrapper[4805]: I1203 00:11:40.719056 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.105099 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.183028 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.187101 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.189035 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.288831 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.586979 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.614814 4805 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.621031 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.721875 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 00:11:41 crc kubenswrapper[4805]: I1203 00:11:41.956792 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.006832 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.094492 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.159277 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.466794 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.606914 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.630733 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.681584 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.691060 
4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.783324 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.806862 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 00:11:42 crc kubenswrapper[4805]: I1203 00:11:42.894933 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.038129 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.154987 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.297883 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.344525 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.515431 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.525799 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.553557 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.557892 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.674243 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 00:11:43 crc kubenswrapper[4805]: I1203 00:11:43.858145 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.272448 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.288274 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.348685 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.448508 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.468446 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.520903 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.573189 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.613979 4805 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.614029 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.725077 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 00:11:44 crc kubenswrapper[4805]: I1203 00:11:44.804724 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.016836 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.101386 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.109262 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.148291 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.210047 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.369371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.369429 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 
03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.409541 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.516777 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.517079 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c" gracePeriod=5 Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.713560 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.753312 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.906965 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 00:11:45 crc kubenswrapper[4805]: I1203 00:11:45.923446 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.327435 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.389160 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.508066 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.605404 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.651849 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.771577 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.951993 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 00:11:46 crc kubenswrapper[4805]: I1203 00:11:46.981893 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.184600 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.188892 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.207255 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.280305 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.477759 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.545248 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.663302 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.673984 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.731532 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.743226 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.763238 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.793327 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.820033 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.860405 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 00:11:47.920144 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 00:11:47 crc kubenswrapper[4805]: I1203 
00:11:47.947448 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 00:11:48 crc kubenswrapper[4805]: I1203 00:11:48.040462 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 00:11:48 crc kubenswrapper[4805]: I1203 00:11:48.058896 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 00:11:48 crc kubenswrapper[4805]: I1203 00:11:48.142653 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 00:11:48 crc kubenswrapper[4805]: I1203 00:11:48.601909 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 00:11:48 crc kubenswrapper[4805]: I1203 00:11:48.624680 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 00:11:48 crc kubenswrapper[4805]: I1203 00:11:48.642573 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 00:11:48 crc kubenswrapper[4805]: I1203 00:11:48.704905 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 00:11:49 crc kubenswrapper[4805]: I1203 00:11:49.435172 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 00:11:49 crc kubenswrapper[4805]: I1203 00:11:49.479336 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 00:11:49 crc kubenswrapper[4805]: I1203 00:11:49.664284 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" 
Dec 03 00:11:49 crc kubenswrapper[4805]: I1203 00:11:49.751032 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 00:11:49 crc kubenswrapper[4805]: I1203 00:11:49.808476 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 00:11:49 crc kubenswrapper[4805]: I1203 00:11:49.842566 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.223682 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.232819 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.461359 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.617714 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.646993 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.647072 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.681086 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787085 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787167 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787272 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787350 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787390 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787373 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.787507 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.788119 4805 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.788153 4805 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.788172 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.788191 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.795583 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.864620 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.888619 4805 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 00:11:50 crc kubenswrapper[4805]: I1203 00:11:50.955121 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.136891 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.137185 4805 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c" exitCode=137 Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.137424 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.137446 4805 scope.go:117] "RemoveContainer" containerID="37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.164691 4805 scope.go:117] "RemoveContainer" containerID="37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c" Dec 03 00:11:51 crc kubenswrapper[4805]: E1203 00:11:51.165479 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c\": container with ID starting with 37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c not found: ID does not exist" containerID="37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.165547 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c"} err="failed to get container status \"37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c\": rpc error: code = NotFound desc = could not find container \"37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c\": container with ID starting with 37ba7393152529aba4791f6004c0a0d6816a1f0f9ff8f661f271ad628eea8f0c not found: ID does not exist" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.344150 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.635673 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.706739 4805 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.819561 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 00:11:51 crc kubenswrapper[4805]: I1203 00:11:51.909050 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.115800 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.189986 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.332418 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.358676 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.439548 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.440477 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.457150 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.457214 4805 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" 
mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d45b20f4-98bc-4e63-b0a1-8115643c05f2" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.463856 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.463906 4805 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d45b20f4-98bc-4e63-b0a1-8115643c05f2" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.535430 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.542131 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.723788 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.822943 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 00:11:52 crc kubenswrapper[4805]: I1203 00:11:52.835612 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.104844 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.293916 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.477251 4805 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.483850 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.529723 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.586439 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.625915 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.647477 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.745451 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.747570 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.752831 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.787271 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.829186 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.933661 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.959623 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:11:53 crc kubenswrapper[4805]: I1203 00:11:53.992067 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 00:11:54 crc kubenswrapper[4805]: I1203 00:11:54.012515 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 00:11:54 crc kubenswrapper[4805]: I1203 00:11:54.013064 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 00:11:54 crc kubenswrapper[4805]: I1203 00:11:54.058733 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 00:11:54 crc kubenswrapper[4805]: I1203 00:11:54.466523 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 00:11:55 crc kubenswrapper[4805]: I1203 00:11:55.290418 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 00:11:55 crc kubenswrapper[4805]: I1203 00:11:55.330655 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 00:11:55 crc kubenswrapper[4805]: I1203 00:11:55.439504 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 00:11:55 crc kubenswrapper[4805]: I1203 00:11:55.506298 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 00:11:56 crc kubenswrapper[4805]: I1203 00:11:56.428245 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 00:11:56 crc kubenswrapper[4805]: I1203 00:11:56.823275 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 00:11:56 crc kubenswrapper[4805]: I1203 00:11:56.854962 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 00:11:56 crc kubenswrapper[4805]: I1203 00:11:56.879157 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 00:11:57 crc kubenswrapper[4805]: I1203 00:11:57.101278 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 00:11:57 crc kubenswrapper[4805]: I1203 00:11:57.177054 4805 generic.go:334] "Generic (PLEG): container finished" podID="61cfa86d-9594-400f-991c-4819838ee49d" containerID="978de6011fd828ff799cf647094509777fe4309c49daa9819685c7dbac9b2e74" exitCode=0 Dec 03 00:11:57 crc kubenswrapper[4805]: I1203 00:11:57.177100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" event={"ID":"61cfa86d-9594-400f-991c-4819838ee49d","Type":"ContainerDied","Data":"978de6011fd828ff799cf647094509777fe4309c49daa9819685c7dbac9b2e74"} Dec 03 00:11:57 crc kubenswrapper[4805]: I1203 00:11:57.177502 4805 scope.go:117] "RemoveContainer" containerID="978de6011fd828ff799cf647094509777fe4309c49daa9819685c7dbac9b2e74" Dec 03 00:11:57 crc kubenswrapper[4805]: I1203 00:11:57.180471 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 00:11:57 crc 
kubenswrapper[4805]: I1203 00:11:57.498324 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 00:11:57 crc kubenswrapper[4805]: I1203 00:11:57.838273 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 00:11:58 crc kubenswrapper[4805]: I1203 00:11:58.134583 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 00:11:58 crc kubenswrapper[4805]: I1203 00:11:58.185080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" event={"ID":"61cfa86d-9594-400f-991c-4819838ee49d","Type":"ContainerStarted","Data":"d7245fe6bd32249972d2c1d392a4f51b4f5ebf9e70be8377cc24ae3ecb550072"} Dec 03 00:11:58 crc kubenswrapper[4805]: I1203 00:11:58.185579 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:11:58 crc kubenswrapper[4805]: I1203 00:11:58.188252 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:11:58 crc kubenswrapper[4805]: I1203 00:11:58.430084 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 00:11:58 crc kubenswrapper[4805]: I1203 00:11:58.436104 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 00:11:59 crc kubenswrapper[4805]: I1203 00:11:59.118219 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 00:11:59 crc kubenswrapper[4805]: I1203 00:11:59.468027 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 00:11:59 crc kubenswrapper[4805]: I1203 
00:11:59.573124 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 00:11:59 crc kubenswrapper[4805]: I1203 00:11:59.657763 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 00:12:00 crc kubenswrapper[4805]: I1203 00:12:00.002491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 00:12:00 crc kubenswrapper[4805]: I1203 00:12:00.494056 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 00:12:00 crc kubenswrapper[4805]: I1203 00:12:00.721075 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 00:12:01 crc kubenswrapper[4805]: I1203 00:12:01.423917 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 00:12:01 crc kubenswrapper[4805]: I1203 00:12:01.626906 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 00:12:01 crc kubenswrapper[4805]: I1203 00:12:01.774751 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 00:12:03 crc kubenswrapper[4805]: I1203 00:12:03.754823 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 00:12:04 crc kubenswrapper[4805]: I1203 00:12:04.541990 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 00:12:04 crc kubenswrapper[4805]: I1203 00:12:04.893333 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 00:12:05 crc kubenswrapper[4805]: I1203 00:12:05.005112 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 00:12:17 crc kubenswrapper[4805]: I1203 00:12:17.811474 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:12:17 crc kubenswrapper[4805]: I1203 00:12:17.812101 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:12:21 crc kubenswrapper[4805]: I1203 00:12:21.544926 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8744"] Dec 03 00:12:21 crc kubenswrapper[4805]: I1203 00:12:21.545301 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" podUID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" containerName="controller-manager" containerID="cri-o://d8a4827fd15ca32907e9a0b422f14dee422d1fa444b568bfcc1b4f4aff8be17b" gracePeriod=30 Dec 03 00:12:21 crc kubenswrapper[4805]: I1203 00:12:21.641814 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8"] Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.396573 4805 generic.go:334] "Generic (PLEG): container finished" podID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" 
containerID="d8a4827fd15ca32907e9a0b422f14dee422d1fa444b568bfcc1b4f4aff8be17b" exitCode=0 Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.396697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" event={"ID":"ea247096-f6e0-490e-8fdd-3d6b6ce7a787","Type":"ContainerDied","Data":"d8a4827fd15ca32907e9a0b422f14dee422d1fa444b568bfcc1b4f4aff8be17b"} Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.396758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" event={"ID":"ea247096-f6e0-490e-8fdd-3d6b6ce7a787","Type":"ContainerDied","Data":"c9f0f0c69c667f5ce4d98e8e47c5fcded675f884e94b57286e7a5705bc25ba93"} Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.396773 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" podUID="976c4d52-a36d-43f0-ae70-921f30051080" containerName="route-controller-manager" containerID="cri-o://fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7" gracePeriod=30 Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.396783 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9f0f0c69c667f5ce4d98e8e47c5fcded675f884e94b57286e7a5705bc25ba93" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.414474 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.584176 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-client-ca\") pod \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.584350 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-config\") pod \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.584388 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-serving-cert\") pod \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.584452 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bctj6\" (UniqueName: \"kubernetes.io/projected/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-kube-api-access-bctj6\") pod \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.584492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-proxy-ca-bundles\") pod \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\" (UID: \"ea247096-f6e0-490e-8fdd-3d6b6ce7a787\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.586706 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-config" (OuterVolumeSpecName: "config") pod "ea247096-f6e0-490e-8fdd-3d6b6ce7a787" (UID: "ea247096-f6e0-490e-8fdd-3d6b6ce7a787"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.586770 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ea247096-f6e0-490e-8fdd-3d6b6ce7a787" (UID: "ea247096-f6e0-490e-8fdd-3d6b6ce7a787"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.587400 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-client-ca" (OuterVolumeSpecName: "client-ca") pod "ea247096-f6e0-490e-8fdd-3d6b6ce7a787" (UID: "ea247096-f6e0-490e-8fdd-3d6b6ce7a787"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.594581 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-kube-api-access-bctj6" (OuterVolumeSpecName: "kube-api-access-bctj6") pod "ea247096-f6e0-490e-8fdd-3d6b6ce7a787" (UID: "ea247096-f6e0-490e-8fdd-3d6b6ce7a787"). InnerVolumeSpecName "kube-api-access-bctj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.597929 4805 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bfzn8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.598001 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" podUID="976c4d52-a36d-43f0-ae70-921f30051080" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.598267 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ea247096-f6e0-490e-8fdd-3d6b6ce7a787" (UID: "ea247096-f6e0-490e-8fdd-3d6b6ce7a787"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.685514 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.685554 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.685566 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.685576 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bctj6\" (UniqueName: \"kubernetes.io/projected/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-kube-api-access-bctj6\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.685587 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea247096-f6e0-490e-8fdd-3d6b6ce7a787-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.751258 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.887098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-685j2\" (UniqueName: \"kubernetes.io/projected/976c4d52-a36d-43f0-ae70-921f30051080-kube-api-access-685j2\") pod \"976c4d52-a36d-43f0-ae70-921f30051080\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.887241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-config\") pod \"976c4d52-a36d-43f0-ae70-921f30051080\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.887349 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976c4d52-a36d-43f0-ae70-921f30051080-serving-cert\") pod \"976c4d52-a36d-43f0-ae70-921f30051080\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.887394 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-client-ca\") pod \"976c4d52-a36d-43f0-ae70-921f30051080\" (UID: \"976c4d52-a36d-43f0-ae70-921f30051080\") " Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.888166 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-config" (OuterVolumeSpecName: "config") pod "976c4d52-a36d-43f0-ae70-921f30051080" (UID: "976c4d52-a36d-43f0-ae70-921f30051080"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.888891 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-client-ca" (OuterVolumeSpecName: "client-ca") pod "976c4d52-a36d-43f0-ae70-921f30051080" (UID: "976c4d52-a36d-43f0-ae70-921f30051080"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.892036 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/976c4d52-a36d-43f0-ae70-921f30051080-kube-api-access-685j2" (OuterVolumeSpecName: "kube-api-access-685j2") pod "976c4d52-a36d-43f0-ae70-921f30051080" (UID: "976c4d52-a36d-43f0-ae70-921f30051080"). InnerVolumeSpecName "kube-api-access-685j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.892697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/976c4d52-a36d-43f0-ae70-921f30051080-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "976c4d52-a36d-43f0-ae70-921f30051080" (UID: "976c4d52-a36d-43f0-ae70-921f30051080"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.989149 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-685j2\" (UniqueName: \"kubernetes.io/projected/976c4d52-a36d-43f0-ae70-921f30051080-kube-api-access-685j2\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.989710 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.989813 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976c4d52-a36d-43f0-ae70-921f30051080-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:22 crc kubenswrapper[4805]: I1203 00:12:22.989832 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976c4d52-a36d-43f0-ae70-921f30051080-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.205822 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9bfcff46-t99w7"] Dec 03 00:12:23 crc kubenswrapper[4805]: E1203 00:12:23.206314 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206340 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 00:12:23 crc kubenswrapper[4805]: E1203 00:12:23.206362 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976c4d52-a36d-43f0-ae70-921f30051080" containerName="route-controller-manager" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206370 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="976c4d52-a36d-43f0-ae70-921f30051080" containerName="route-controller-manager" Dec 03 00:12:23 crc kubenswrapper[4805]: E1203 00:12:23.206386 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" containerName="installer" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206395 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" containerName="installer" Dec 03 00:12:23 crc kubenswrapper[4805]: E1203 00:12:23.206411 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" containerName="controller-manager" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206419 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" containerName="controller-manager" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206522 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" containerName="controller-manager" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206537 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206547 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="976c4d52-a36d-43f0-ae70-921f30051080" containerName="route-controller-manager" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.206554 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="85618e2d-61ea-4ef3-ab0b-205f236bd9c8" containerName="installer" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.207073 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.218841 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.219825 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.225545 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.233896 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9bfcff46-t99w7"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.294532 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-client-ca\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.294645 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-client-ca\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.294808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-serving-cert\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.294928 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-config\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.294965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e389a21-c48c-4275-93d7-abe2fbed56d2-serving-cert\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.294995 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dns2r\" (UniqueName: \"kubernetes.io/projected/9e389a21-c48c-4275-93d7-abe2fbed56d2-kube-api-access-dns2r\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.295018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvr24\" (UniqueName: \"kubernetes.io/projected/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-kube-api-access-wvr24\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " 
pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.295060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-config\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.295144 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-proxy-ca-bundles\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-config\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e389a21-c48c-4275-93d7-abe2fbed56d2-serving-cert\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396728 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dns2r\" (UniqueName: 
\"kubernetes.io/projected/9e389a21-c48c-4275-93d7-abe2fbed56d2-kube-api-access-dns2r\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396755 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvr24\" (UniqueName: \"kubernetes.io/projected/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-kube-api-access-wvr24\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-config\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-proxy-ca-bundles\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396856 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-client-ca\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: 
I1203 00:12:23.396882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-client-ca\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.396901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-serving-cert\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.401833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-client-ca\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.402494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-client-ca\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.402782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-config\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " 
pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.403154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-config\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.403739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-serving-cert\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.405577 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e389a21-c48c-4275-93d7-abe2fbed56d2-serving-cert\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.412306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-proxy-ca-bundles\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.412645 4805 generic.go:334] "Generic (PLEG): container finished" podID="976c4d52-a36d-43f0-ae70-921f30051080" containerID="fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7" exitCode=0 Dec 03 00:12:23 crc 
kubenswrapper[4805]: I1203 00:12:23.412752 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.412796 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8744" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.412741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" event={"ID":"976c4d52-a36d-43f0-ae70-921f30051080","Type":"ContainerDied","Data":"fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7"} Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.412892 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8" event={"ID":"976c4d52-a36d-43f0-ae70-921f30051080","Type":"ContainerDied","Data":"75944fb132c073036f3ae9167001828c20e680d261017e624b4e0b72700779f7"} Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.412926 4805 scope.go:117] "RemoveContainer" containerID="fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.419026 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dns2r\" (UniqueName: \"kubernetes.io/projected/9e389a21-c48c-4275-93d7-abe2fbed56d2-kube-api-access-dns2r\") pod \"route-controller-manager-8697489c76-5xb2q\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.422897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvr24\" (UniqueName: 
\"kubernetes.io/projected/d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a-kube-api-access-wvr24\") pod \"controller-manager-9bfcff46-t99w7\" (UID: \"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a\") " pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.451765 4805 scope.go:117] "RemoveContainer" containerID="fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7" Dec 03 00:12:23 crc kubenswrapper[4805]: E1203 00:12:23.452700 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7\": container with ID starting with fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7 not found: ID does not exist" containerID="fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.453426 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7"} err="failed to get container status \"fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7\": rpc error: code = NotFound desc = could not find container \"fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7\": container with ID starting with fefcc9ea32b86f6445e57bc3d8633adaa8037d0a4d289d6f031fbc076fceb4e7 not found: ID does not exist" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.456783 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.466008 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bfzn8"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.471469 4805 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8744"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.475427 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8744"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.534023 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.545728 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.748384 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q"] Dec 03 00:12:23 crc kubenswrapper[4805]: I1203 00:12:23.774537 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9bfcff46-t99w7"] Dec 03 00:12:23 crc kubenswrapper[4805]: W1203 00:12:23.780040 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd87d37bf_e9cf_4dc6_a5f8_8b82f8c5f50a.slice/crio-ecc1f7355a1bf3565598bbcd850a96e06828207456afe009a992fafbbe572805 WatchSource:0}: Error finding container ecc1f7355a1bf3565598bbcd850a96e06828207456afe009a992fafbbe572805: Status 404 returned error can't find the container with id ecc1f7355a1bf3565598bbcd850a96e06828207456afe009a992fafbbe572805 Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.421119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" event={"ID":"9e389a21-c48c-4275-93d7-abe2fbed56d2","Type":"ContainerStarted","Data":"233b42a2725edd0892bdd4b8df515300bac2b474a37b17630ce9fc0f840f39d1"} Dec 03 
00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.421188 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" event={"ID":"9e389a21-c48c-4275-93d7-abe2fbed56d2","Type":"ContainerStarted","Data":"57d63e404c626a4a5ef395a4c02cfaadd83e6cca617e5f13357a65f0845ceddd"} Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.421559 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.430048 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976c4d52-a36d-43f0-ae70-921f30051080" path="/var/lib/kubelet/pods/976c4d52-a36d-43f0-ae70-921f30051080/volumes" Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.430797 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea247096-f6e0-490e-8fdd-3d6b6ce7a787" path="/var/lib/kubelet/pods/ea247096-f6e0-490e-8fdd-3d6b6ce7a787/volumes" Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.431273 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.431297 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" event={"ID":"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a","Type":"ContainerStarted","Data":"855a8d620e57f2a148efdf8aee1363b0cb3bde088f5de4669c7ca79b586d12ce"} Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.431313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" event={"ID":"d87d37bf-e9cf-4dc6-a5f8-8b82f8c5f50a","Type":"ContainerStarted","Data":"ecc1f7355a1bf3565598bbcd850a96e06828207456afe009a992fafbbe572805"} Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 
00:12:24.431359 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.432798 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.451059 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" podStartSLOduration=3.451041685 podStartE2EDuration="3.451041685s" podCreationTimestamp="2025-12-03 00:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:12:24.449073412 +0000 UTC m=+368.298036038" watchObservedRunningTime="2025-12-03 00:12:24.451041685 +0000 UTC m=+368.300004291" Dec 03 00:12:24 crc kubenswrapper[4805]: I1203 00:12:24.530595 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9bfcff46-t99w7" podStartSLOduration=3.530561984 podStartE2EDuration="3.530561984s" podCreationTimestamp="2025-12-03 00:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:12:24.527161935 +0000 UTC m=+368.376124541" watchObservedRunningTime="2025-12-03 00:12:24.530561984 +0000 UTC m=+368.379524600" Dec 03 00:12:47 crc kubenswrapper[4805]: I1203 00:12:47.811530 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:12:47 crc kubenswrapper[4805]: I1203 00:12:47.812453 4805 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:13:17 crc kubenswrapper[4805]: I1203 00:13:17.811316 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:13:17 crc kubenswrapper[4805]: I1203 00:13:17.812173 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:13:17 crc kubenswrapper[4805]: I1203 00:13:17.812254 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:13:17 crc kubenswrapper[4805]: I1203 00:13:17.813063 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b197c067b1f4b6da9cb594f04bc4f3715facaffce52939947e8f8684e3a78115"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:13:17 crc kubenswrapper[4805]: I1203 00:13:17.813130 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" 
containerName="machine-config-daemon" containerID="cri-o://b197c067b1f4b6da9cb594f04bc4f3715facaffce52939947e8f8684e3a78115" gracePeriod=600 Dec 03 00:13:18 crc kubenswrapper[4805]: I1203 00:13:18.802662 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="b197c067b1f4b6da9cb594f04bc4f3715facaffce52939947e8f8684e3a78115" exitCode=0 Dec 03 00:13:18 crc kubenswrapper[4805]: I1203 00:13:18.802733 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"b197c067b1f4b6da9cb594f04bc4f3715facaffce52939947e8f8684e3a78115"} Dec 03 00:13:18 crc kubenswrapper[4805]: I1203 00:13:18.803273 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"2d57f60c8e52a89583b1e40f506517f73a5b87757f993a4d20080eabc8d60d72"} Dec 03 00:13:18 crc kubenswrapper[4805]: I1203 00:13:18.803343 4805 scope.go:117] "RemoveContainer" containerID="7ebc83263db68a2febcff715997832c2c1fed01d4ca51f09f5a9f112c209ea37" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.045727 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6gm8g"] Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.046941 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6gm8g" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="registry-server" containerID="cri-o://96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd" gracePeriod=2 Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.244875 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9bzd"] Dec 03 00:13:21 crc 
kubenswrapper[4805]: I1203 00:13:21.245734 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h9bzd" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="registry-server" containerID="cri-o://84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5" gracePeriod=2 Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.504336 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.559926 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q"] Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.563658 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" podUID="9e389a21-c48c-4275-93d7-abe2fbed56d2" containerName="route-controller-manager" containerID="cri-o://233b42a2725edd0892bdd4b8df515300bac2b474a37b17630ce9fc0f840f39d1" gracePeriod=30 Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.644094 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-utilities\") pod \"beeb713a-2089-47a0-bda3-e51a217f0f5e\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.644558 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dlzq\" (UniqueName: \"kubernetes.io/projected/beeb713a-2089-47a0-bda3-e51a217f0f5e-kube-api-access-6dlzq\") pod \"beeb713a-2089-47a0-bda3-e51a217f0f5e\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.644668 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-catalog-content\") pod \"beeb713a-2089-47a0-bda3-e51a217f0f5e\" (UID: \"beeb713a-2089-47a0-bda3-e51a217f0f5e\") " Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.646412 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-utilities" (OuterVolumeSpecName: "utilities") pod "beeb713a-2089-47a0-bda3-e51a217f0f5e" (UID: "beeb713a-2089-47a0-bda3-e51a217f0f5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.654480 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beeb713a-2089-47a0-bda3-e51a217f0f5e-kube-api-access-6dlzq" (OuterVolumeSpecName: "kube-api-access-6dlzq") pod "beeb713a-2089-47a0-bda3-e51a217f0f5e" (UID: "beeb713a-2089-47a0-bda3-e51a217f0f5e"). InnerVolumeSpecName "kube-api-access-6dlzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.691455 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.698904 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beeb713a-2089-47a0-bda3-e51a217f0f5e" (UID: "beeb713a-2089-47a0-bda3-e51a217f0f5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.747647 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.747713 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beeb713a-2089-47a0-bda3-e51a217f0f5e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.747725 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dlzq\" (UniqueName: \"kubernetes.io/projected/beeb713a-2089-47a0-bda3-e51a217f0f5e-kube-api-access-6dlzq\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.826425 4805 generic.go:334] "Generic (PLEG): container finished" podID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerID="84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5" exitCode=0 Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.826518 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerDied","Data":"84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5"} Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.826626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9bzd" event={"ID":"a653b1e4-a669-4c68-abdb-99686a4b39eb","Type":"ContainerDied","Data":"df9411d5e4afcbd1afdc16c23769a839135e114d7f22a21529a995c5126aede0"} Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.826548 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9bzd" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.826665 4805 scope.go:117] "RemoveContainer" containerID="84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.830562 4805 generic.go:334] "Generic (PLEG): container finished" podID="9e389a21-c48c-4275-93d7-abe2fbed56d2" containerID="233b42a2725edd0892bdd4b8df515300bac2b474a37b17630ce9fc0f840f39d1" exitCode=0 Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.830638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" event={"ID":"9e389a21-c48c-4275-93d7-abe2fbed56d2","Type":"ContainerDied","Data":"233b42a2725edd0892bdd4b8df515300bac2b474a37b17630ce9fc0f840f39d1"} Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.836840 4805 generic.go:334] "Generic (PLEG): container finished" podID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerID="96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd" exitCode=0 Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.836910 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gm8g" event={"ID":"beeb713a-2089-47a0-bda3-e51a217f0f5e","Type":"ContainerDied","Data":"96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd"} Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.836957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gm8g" event={"ID":"beeb713a-2089-47a0-bda3-e51a217f0f5e","Type":"ContainerDied","Data":"8a255ad769db4a049926227c46f8a7849571e286a7d3e385e0bce44064490928"} Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.837100 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6gm8g" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.848338 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-catalog-content\") pod \"a653b1e4-a669-4c68-abdb-99686a4b39eb\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.848408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-utilities\") pod \"a653b1e4-a669-4c68-abdb-99686a4b39eb\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.848554 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dxvr\" (UniqueName: \"kubernetes.io/projected/a653b1e4-a669-4c68-abdb-99686a4b39eb-kube-api-access-7dxvr\") pod \"a653b1e4-a669-4c68-abdb-99686a4b39eb\" (UID: \"a653b1e4-a669-4c68-abdb-99686a4b39eb\") " Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.849354 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-utilities" (OuterVolumeSpecName: "utilities") pod "a653b1e4-a669-4c68-abdb-99686a4b39eb" (UID: "a653b1e4-a669-4c68-abdb-99686a4b39eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.852788 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a653b1e4-a669-4c68-abdb-99686a4b39eb-kube-api-access-7dxvr" (OuterVolumeSpecName: "kube-api-access-7dxvr") pod "a653b1e4-a669-4c68-abdb-99686a4b39eb" (UID: "a653b1e4-a669-4c68-abdb-99686a4b39eb"). InnerVolumeSpecName "kube-api-access-7dxvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.856548 4805 scope.go:117] "RemoveContainer" containerID="a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.874386 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6gm8g"] Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.878514 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6gm8g"] Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.903507 4805 scope.go:117] "RemoveContainer" containerID="3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.917136 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.925392 4805 scope.go:117] "RemoveContainer" containerID="84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5" Dec 03 00:13:21 crc kubenswrapper[4805]: E1203 00:13:21.926270 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5\": container with ID starting with 84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5 not found: ID does not exist" containerID="84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.926314 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5"} err="failed to get container status \"84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5\": rpc error: code = NotFound 
desc = could not find container \"84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5\": container with ID starting with 84f136e40be97e749c6fb312598ccbf76e6a40cca20fc2177ff604361f386ca5 not found: ID does not exist" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.926343 4805 scope.go:117] "RemoveContainer" containerID="a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68" Dec 03 00:13:21 crc kubenswrapper[4805]: E1203 00:13:21.926662 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68\": container with ID starting with a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68 not found: ID does not exist" containerID="a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.926704 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68"} err="failed to get container status \"a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68\": rpc error: code = NotFound desc = could not find container \"a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68\": container with ID starting with a52a42d126568d9371264ec806d4f71a860a9f07aa3c7103a74503ac5252af68 not found: ID does not exist" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.926753 4805 scope.go:117] "RemoveContainer" containerID="3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb" Dec 03 00:13:21 crc kubenswrapper[4805]: E1203 00:13:21.926977 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb\": container with ID starting with 
3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb not found: ID does not exist" containerID="3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.927007 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb"} err="failed to get container status \"3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb\": rpc error: code = NotFound desc = could not find container \"3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb\": container with ID starting with 3f0eb2c4765475c3bb09d273d655e16344433b306a14b54c1ba1881226b9b9fb not found: ID does not exist" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.927025 4805 scope.go:117] "RemoveContainer" containerID="96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.938066 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a653b1e4-a669-4c68-abdb-99686a4b39eb" (UID: "a653b1e4-a669-4c68-abdb-99686a4b39eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.950612 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dxvr\" (UniqueName: \"kubernetes.io/projected/a653b1e4-a669-4c68-abdb-99686a4b39eb-kube-api-access-7dxvr\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.950655 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.950668 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a653b1e4-a669-4c68-abdb-99686a4b39eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.960283 4805 scope.go:117] "RemoveContainer" containerID="6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.978502 4805 scope.go:117] "RemoveContainer" containerID="32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.993494 4805 scope.go:117] "RemoveContainer" containerID="96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd" Dec 03 00:13:21 crc kubenswrapper[4805]: E1203 00:13:21.993955 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd\": container with ID starting with 96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd not found: ID does not exist" containerID="96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.993993 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd"} err="failed to get container status \"96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd\": rpc error: code = NotFound desc = could not find container \"96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd\": container with ID starting with 96218898bd6443b6ae7a7442834cec8d1afb736e2b9be4820cfa79a98a089cdd not found: ID does not exist" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.994024 4805 scope.go:117] "RemoveContainer" containerID="6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b" Dec 03 00:13:21 crc kubenswrapper[4805]: E1203 00:13:21.994244 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b\": container with ID starting with 6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b not found: ID does not exist" containerID="6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.994293 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b"} err="failed to get container status \"6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b\": rpc error: code = NotFound desc = could not find container \"6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b\": container with ID starting with 6e4893b7cd66d5bed98698c28d7a783b04e9b8abde7662bf0e38a3ecde69959b not found: ID does not exist" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.994305 4805 scope.go:117] "RemoveContainer" containerID="32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4" Dec 03 00:13:21 crc kubenswrapper[4805]: E1203 00:13:21.994519 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4\": container with ID starting with 32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4 not found: ID does not exist" containerID="32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4" Dec 03 00:13:21 crc kubenswrapper[4805]: I1203 00:13:21.994541 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4"} err="failed to get container status \"32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4\": rpc error: code = NotFound desc = could not find container \"32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4\": container with ID starting with 32a2bea7433d07a4dafed3cf18b6dae7660de791284b8ddb8409fcc52ee302b4 not found: ID does not exist" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.051607 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-config\") pod \"9e389a21-c48c-4275-93d7-abe2fbed56d2\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.051726 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dns2r\" (UniqueName: \"kubernetes.io/projected/9e389a21-c48c-4275-93d7-abe2fbed56d2-kube-api-access-dns2r\") pod \"9e389a21-c48c-4275-93d7-abe2fbed56d2\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.051803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e389a21-c48c-4275-93d7-abe2fbed56d2-serving-cert\") pod \"9e389a21-c48c-4275-93d7-abe2fbed56d2\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " Dec 
03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.051835 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-client-ca\") pod \"9e389a21-c48c-4275-93d7-abe2fbed56d2\" (UID: \"9e389a21-c48c-4275-93d7-abe2fbed56d2\") " Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.052778 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e389a21-c48c-4275-93d7-abe2fbed56d2" (UID: "9e389a21-c48c-4275-93d7-abe2fbed56d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.052905 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-config" (OuterVolumeSpecName: "config") pod "9e389a21-c48c-4275-93d7-abe2fbed56d2" (UID: "9e389a21-c48c-4275-93d7-abe2fbed56d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.056508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e389a21-c48c-4275-93d7-abe2fbed56d2-kube-api-access-dns2r" (OuterVolumeSpecName: "kube-api-access-dns2r") pod "9e389a21-c48c-4275-93d7-abe2fbed56d2" (UID: "9e389a21-c48c-4275-93d7-abe2fbed56d2"). InnerVolumeSpecName "kube-api-access-dns2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.057128 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e389a21-c48c-4275-93d7-abe2fbed56d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e389a21-c48c-4275-93d7-abe2fbed56d2" (UID: "9e389a21-c48c-4275-93d7-abe2fbed56d2"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.154375 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.155561 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e389a21-c48c-4275-93d7-abe2fbed56d2-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.155617 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dns2r\" (UniqueName: \"kubernetes.io/projected/9e389a21-c48c-4275-93d7-abe2fbed56d2-kube-api-access-dns2r\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.155634 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e389a21-c48c-4275-93d7-abe2fbed56d2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.181594 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9bzd"] Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.189014 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h9bzd"] Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.433743 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" path="/var/lib/kubelet/pods/a653b1e4-a669-4c68-abdb-99686a4b39eb/volumes" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.434677 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" path="/var/lib/kubelet/pods/beeb713a-2089-47a0-bda3-e51a217f0f5e/volumes" Dec 03 
00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.851974 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" event={"ID":"9e389a21-c48c-4275-93d7-abe2fbed56d2","Type":"ContainerDied","Data":"57d63e404c626a4a5ef395a4c02cfaadd83e6cca617e5f13357a65f0845ceddd"} Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.852042 4805 scope.go:117] "RemoveContainer" containerID="233b42a2725edd0892bdd4b8df515300bac2b474a37b17630ce9fc0f840f39d1" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.852001 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q" Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.884040 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q"] Dec 03 00:13:22 crc kubenswrapper[4805]: I1203 00:13:22.889509 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8697489c76-5xb2q"] Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.247724 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl"] Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.248066 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e389a21-c48c-4275-93d7-abe2fbed56d2" containerName="route-controller-manager" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248088 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e389a21-c48c-4275-93d7-abe2fbed56d2" containerName="route-controller-manager" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.248122 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="extract-content" Dec 03 00:13:23 crc 
kubenswrapper[4805]: I1203 00:13:23.248134 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="extract-content" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.248153 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="extract-utilities" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248167 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="extract-utilities" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.248190 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="extract-content" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248324 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="extract-content" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.248352 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="registry-server" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248365 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="registry-server" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.248384 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="extract-utilities" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248397 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="extract-utilities" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.248416 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="registry-server" Dec 03 00:13:23 crc 
kubenswrapper[4805]: I1203 00:13:23.248428 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="registry-server" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248600 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e389a21-c48c-4275-93d7-abe2fbed56d2" containerName="route-controller-manager" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248626 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="beeb713a-2089-47a0-bda3-e51a217f0f5e" containerName="registry-server" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.248650 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a653b1e4-a669-4c68-abdb-99686a4b39eb" containerName="registry-server" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.249303 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.254468 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.254785 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.254924 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.255271 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.255630 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 00:13:23 crc 
kubenswrapper[4805]: I1203 00:13:23.255830 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.261599 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl"] Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.273281 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-config\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.273347 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-client-ca\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.273382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkq8\" (UniqueName: \"kubernetes.io/projected/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-kube-api-access-9rkq8\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.273568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-serving-cert\") 
pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.375318 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-config\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.375434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-client-ca\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.375508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rkq8\" (UniqueName: \"kubernetes.io/projected/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-kube-api-access-9rkq8\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.375596 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-serving-cert\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.377008 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-client-ca\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.377250 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-config\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.383540 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-serving-cert\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.408678 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rkq8\" (UniqueName: \"kubernetes.io/projected/6c4682d6-b6e9-4d6a-a46a-db0898cf21e4-kube-api-access-9rkq8\") pod \"route-controller-manager-75f88c5b7c-bb5dl\" (UID: \"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4\") " pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.441941 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26vk7"] Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.442227 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-26vk7" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="registry-server" containerID="cri-o://f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed" gracePeriod=2 Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.568678 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.645645 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zd9dv"] Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.645922 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zd9dv" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="registry-server" containerID="cri-o://8d73bb41f68b93a0e303373297cfcc037a5547f5ea78acdd1c3e9f3f222fa9e1" gracePeriod=2 Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.809355 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl"] Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.821805 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:13:23 crc kubenswrapper[4805]: W1203 00:13:23.824091 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4682d6_b6e9_4d6a_a46a_db0898cf21e4.slice/crio-2b9cd9e9e10c29b7031d07a00a997f24e1dba276f85c799f89f2f79aa12c4b33 WatchSource:0}: Error finding container 2b9cd9e9e10c29b7031d07a00a997f24e1dba276f85c799f89f2f79aa12c4b33: Status 404 returned error can't find the container with id 2b9cd9e9e10c29b7031d07a00a997f24e1dba276f85c799f89f2f79aa12c4b33 Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.865854 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerID="8d73bb41f68b93a0e303373297cfcc037a5547f5ea78acdd1c3e9f3f222fa9e1" exitCode=0 Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.865910 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd9dv" event={"ID":"cb20f5d6-0456-4fb8-8435-407ccfc9319f","Type":"ContainerDied","Data":"8d73bb41f68b93a0e303373297cfcc037a5547f5ea78acdd1c3e9f3f222fa9e1"} Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.867414 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" event={"ID":"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4","Type":"ContainerStarted","Data":"2b9cd9e9e10c29b7031d07a00a997f24e1dba276f85c799f89f2f79aa12c4b33"} Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.875990 4805 generic.go:334] "Generic (PLEG): container finished" podID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerID="f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed" exitCode=0 Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.876052 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26vk7" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.876110 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26vk7" event={"ID":"809c4c09-4e7d-40b5-9964-7b09c2a19ea5","Type":"ContainerDied","Data":"f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed"} Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.876182 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26vk7" event={"ID":"809c4c09-4e7d-40b5-9964-7b09c2a19ea5","Type":"ContainerDied","Data":"3efe5e003ab05fe53906d263fcb7e3a6634aa612805f67864820359ac916ff14"} Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.876249 4805 scope.go:117] "RemoveContainer" containerID="f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.896020 4805 scope.go:117] "RemoveContainer" containerID="3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.910508 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-catalog-content\") pod \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.910548 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c48zp\" (UniqueName: \"kubernetes.io/projected/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-kube-api-access-c48zp\") pod \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.910596 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-utilities\") pod \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\" (UID: \"809c4c09-4e7d-40b5-9964-7b09c2a19ea5\") " Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.912395 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-utilities" (OuterVolumeSpecName: "utilities") pod "809c4c09-4e7d-40b5-9964-7b09c2a19ea5" (UID: "809c4c09-4e7d-40b5-9964-7b09c2a19ea5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.919697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-kube-api-access-c48zp" (OuterVolumeSpecName: "kube-api-access-c48zp") pod "809c4c09-4e7d-40b5-9964-7b09c2a19ea5" (UID: "809c4c09-4e7d-40b5-9964-7b09c2a19ea5"). InnerVolumeSpecName "kube-api-access-c48zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.928746 4805 scope.go:117] "RemoveContainer" containerID="e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.935103 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "809c4c09-4e7d-40b5-9964-7b09c2a19ea5" (UID: "809c4c09-4e7d-40b5-9964-7b09c2a19ea5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.955043 4805 scope.go:117] "RemoveContainer" containerID="f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.955701 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed\": container with ID starting with f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed not found: ID does not exist" containerID="f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.955759 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed"} err="failed to get container status \"f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed\": rpc error: code = NotFound desc = could not find container \"f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed\": container with ID starting with f1d8c8c0050c7907e23e4b132c02d7aa60171f2717ca3c005076d5194c6067ed not found: ID does not exist" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.955794 4805 scope.go:117] "RemoveContainer" containerID="3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.956122 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678\": container with ID starting with 3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678 not found: ID does not exist" containerID="3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.956161 
4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678"} err="failed to get container status \"3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678\": rpc error: code = NotFound desc = could not find container \"3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678\": container with ID starting with 3f31d7197a165a431635289a95981a8f9fb4396ff412ff601be71b9987254678 not found: ID does not exist" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.956188 4805 scope.go:117] "RemoveContainer" containerID="e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988" Dec 03 00:13:23 crc kubenswrapper[4805]: E1203 00:13:23.956570 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988\": container with ID starting with e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988 not found: ID does not exist" containerID="e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988" Dec 03 00:13:23 crc kubenswrapper[4805]: I1203 00:13:23.956608 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988"} err="failed to get container status \"e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988\": rpc error: code = NotFound desc = could not find container \"e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988\": container with ID starting with e3e9f5cb457bfeae2bfc4dea5fb9ea30d02aaad5a6c16cfe945d9a9b44471988 not found: ID does not exist" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.001783 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.012304 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.012609 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.012702 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c48zp\" (UniqueName: \"kubernetes.io/projected/809c4c09-4e7d-40b5-9964-7b09c2a19ea5-kube-api-access-c48zp\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.115006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsjz9\" (UniqueName: \"kubernetes.io/projected/cb20f5d6-0456-4fb8-8435-407ccfc9319f-kube-api-access-bsjz9\") pod \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.115085 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-catalog-content\") pod \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.115233 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-utilities\") pod \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\" (UID: \"cb20f5d6-0456-4fb8-8435-407ccfc9319f\") " Dec 03 00:13:24 crc kubenswrapper[4805]: 
I1203 00:13:24.116587 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-utilities" (OuterVolumeSpecName: "utilities") pod "cb20f5d6-0456-4fb8-8435-407ccfc9319f" (UID: "cb20f5d6-0456-4fb8-8435-407ccfc9319f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.119935 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb20f5d6-0456-4fb8-8435-407ccfc9319f-kube-api-access-bsjz9" (OuterVolumeSpecName: "kube-api-access-bsjz9") pod "cb20f5d6-0456-4fb8-8435-407ccfc9319f" (UID: "cb20f5d6-0456-4fb8-8435-407ccfc9319f"). InnerVolumeSpecName "kube-api-access-bsjz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.216640 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26vk7"] Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.217408 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsjz9\" (UniqueName: \"kubernetes.io/projected/cb20f5d6-0456-4fb8-8435-407ccfc9319f-kube-api-access-bsjz9\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.217452 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.224954 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-26vk7"] Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.258121 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"cb20f5d6-0456-4fb8-8435-407ccfc9319f" (UID: "cb20f5d6-0456-4fb8-8435-407ccfc9319f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.319400 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb20f5d6-0456-4fb8-8435-407ccfc9319f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.431778 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" path="/var/lib/kubelet/pods/809c4c09-4e7d-40b5-9964-7b09c2a19ea5/volumes" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.432478 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e389a21-c48c-4275-93d7-abe2fbed56d2" path="/var/lib/kubelet/pods/9e389a21-c48c-4275-93d7-abe2fbed56d2/volumes" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.888700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zd9dv" event={"ID":"cb20f5d6-0456-4fb8-8435-407ccfc9319f","Type":"ContainerDied","Data":"89b91d3813bdd10e05aaf2e79e85e044fa4e047c19cce7a2ad00e13c378c907d"} Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.888766 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zd9dv" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.888772 4805 scope.go:117] "RemoveContainer" containerID="8d73bb41f68b93a0e303373297cfcc037a5547f5ea78acdd1c3e9f3f222fa9e1" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.890122 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" event={"ID":"6c4682d6-b6e9-4d6a-a46a-db0898cf21e4","Type":"ContainerStarted","Data":"03b20c69b4228c24c6647d15a7778932f2bb15a0e875fc9e58d49bbe00f34680"} Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.890723 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.897443 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.910319 4805 scope.go:117] "RemoveContainer" containerID="4b97199cdde73a713d0d305f570d3e3013abbf5dbe99f57ff995359e52eb0b4b" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.934858 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zd9dv"] Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.951445 4805 scope.go:117] "RemoveContainer" containerID="3438bacc74cd03ca2abb42cd44281714addefef15961f521135ae87c2e4b8084" Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.960389 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zd9dv"] Dec 03 00:13:24 crc kubenswrapper[4805]: I1203 00:13:24.965186 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75f88c5b7c-bb5dl" podStartSLOduration=3.965161784 
podStartE2EDuration="3.965161784s" podCreationTimestamp="2025-12-03 00:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:13:24.9638654 +0000 UTC m=+428.812828026" watchObservedRunningTime="2025-12-03 00:13:24.965161784 +0000 UTC m=+428.814124380" Dec 03 00:13:26 crc kubenswrapper[4805]: I1203 00:13:26.440454 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" path="/var/lib/kubelet/pods/cb20f5d6-0456-4fb8-8435-407ccfc9319f/volumes" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.311683 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fzrz8"] Dec 03 00:13:28 crc kubenswrapper[4805]: E1203 00:13:28.312307 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="extract-utilities" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312324 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="extract-utilities" Dec 03 00:13:28 crc kubenswrapper[4805]: E1203 00:13:28.312334 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="registry-server" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312340 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="registry-server" Dec 03 00:13:28 crc kubenswrapper[4805]: E1203 00:13:28.312407 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="registry-server" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312415 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="registry-server" Dec 03 00:13:28 crc 
kubenswrapper[4805]: E1203 00:13:28.312430 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="extract-content" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312437 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="extract-content" Dec 03 00:13:28 crc kubenswrapper[4805]: E1203 00:13:28.312444 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="extract-utilities" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312450 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="extract-utilities" Dec 03 00:13:28 crc kubenswrapper[4805]: E1203 00:13:28.312459 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="extract-content" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312465 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="extract-content" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312575 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb20f5d6-0456-4fb8-8435-407ccfc9319f" containerName="registry-server" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.312592 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="809c4c09-4e7d-40b5-9964-7b09c2a19ea5" containerName="registry-server" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.313017 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.331385 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fzrz8"] Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.375623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-registry-certificates\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.375709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qn72\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-kube-api-access-4qn72\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.375782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-bound-sa-token\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.375889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-registry-tls\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.375978 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-trusted-ca\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.376084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.376140 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.376304 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.402579 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.478251 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.478328 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.478399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-registry-certificates\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.478430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qn72\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-kube-api-access-4qn72\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.478471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-bound-sa-token\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.478495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-registry-tls\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.478527 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-trusted-ca\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.479482 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.480134 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-trusted-ca\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: 
\"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.480231 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-registry-certificates\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.490138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-registry-tls\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.490176 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.494064 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-bound-sa-token\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.496570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qn72\" (UniqueName: 
\"kubernetes.io/projected/4c198fd8-0813-4ab3-b0e1-ff132c4918b0-kube-api-access-4qn72\") pod \"image-registry-66df7c8f76-fzrz8\" (UID: \"4c198fd8-0813-4ab3-b0e1-ff132c4918b0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.630642 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.884271 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fzrz8"] Dec 03 00:13:28 crc kubenswrapper[4805]: I1203 00:13:28.935588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" event={"ID":"4c198fd8-0813-4ab3-b0e1-ff132c4918b0","Type":"ContainerStarted","Data":"470e10782292cf07b311accd84fe6529ec65e284490a5cd070914d358588a671"} Dec 03 00:13:29 crc kubenswrapper[4805]: I1203 00:13:29.944505 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" event={"ID":"4c198fd8-0813-4ab3-b0e1-ff132c4918b0","Type":"ContainerStarted","Data":"4e4f3a310e9e86abc525ae10ddb8b66bd0b9e3050cc3db02434445a55194acb6"} Dec 03 00:13:29 crc kubenswrapper[4805]: I1203 00:13:29.945050 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:29 crc kubenswrapper[4805]: I1203 00:13:29.965735 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" podStartSLOduration=1.965709237 podStartE2EDuration="1.965709237s" podCreationTimestamp="2025-12-03 00:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:13:29.96469424 +0000 UTC m=+433.813656856" 
watchObservedRunningTime="2025-12-03 00:13:29.965709237 +0000 UTC m=+433.814671833" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.189599 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtztk"] Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.190835 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtztk" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="registry-server" containerID="cri-o://419cdd6923cedafe19d5a4fe6f5ebfe6bad6fd8ff807aa79ec7ee22f4b866cb8" gracePeriod=30 Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.203286 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htqft"] Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.203961 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-htqft" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="registry-server" containerID="cri-o://105f2a7bd585ad3b1527246e03750f01c68e5f95b7e355308a5bd70efb844dc8" gracePeriod=30 Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.214739 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rm6z"] Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.215034 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" containerID="cri-o://d7245fe6bd32249972d2c1d392a4f51b4f5ebf9e70be8377cc24ae3ecb550072" gracePeriod=30 Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.233633 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hlqz"] Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.234012 4805 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9hlqz" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="registry-server" containerID="cri-o://7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769" gracePeriod=30 Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.248493 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r647s"] Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.249357 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r647s" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="registry-server" containerID="cri-o://fc0f9bb92aa6f1bf26a573fc866f7afe42d647b17e963678eef0a2fda4a36d08" gracePeriod=30 Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.261778 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cks6r"] Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.263274 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.275752 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cks6r"] Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.333965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.334030 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.334071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb974\" (UniqueName: \"kubernetes.io/projected/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-kube-api-access-wb974\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.435341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cks6r\" (UID: 
\"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.435392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.435434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb974\" (UniqueName: \"kubernetes.io/projected/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-kube-api-access-wb974\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.437570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.442819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.454848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wb974\" (UniqueName: \"kubernetes.io/projected/cad4d2ba-cd8b-4886-90cd-ff09fdbc6206-kube-api-access-wb974\") pod \"marketplace-operator-79b997595-cks6r\" (UID: \"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206\") " pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.601036 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:38 crc kubenswrapper[4805]: I1203 00:13:38.861759 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cks6r"] Dec 03 00:13:38 crc kubenswrapper[4805]: W1203 00:13:38.877912 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcad4d2ba_cd8b_4886_90cd_ff09fdbc6206.slice/crio-058a5e1c59648c1e4c7065279181d293801121ba717eed636c75f7391005f404 WatchSource:0}: Error finding container 058a5e1c59648c1e4c7065279181d293801121ba717eed636c75f7391005f404: Status 404 returned error can't find the container with id 058a5e1c59648c1e4c7065279181d293801121ba717eed636c75f7391005f404 Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.045606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" event={"ID":"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206","Type":"ContainerStarted","Data":"058a5e1c59648c1e4c7065279181d293801121ba717eed636c75f7391005f404"} Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.049042 4805 generic.go:334] "Generic (PLEG): container finished" podID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerID="105f2a7bd585ad3b1527246e03750f01c68e5f95b7e355308a5bd70efb844dc8" exitCode=0 Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.049132 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htqft" 
event={"ID":"c6a69f1e-7226-4f13-8dce-fcfc6c25f240","Type":"ContainerDied","Data":"105f2a7bd585ad3b1527246e03750f01c68e5f95b7e355308a5bd70efb844dc8"} Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.051593 4805 generic.go:334] "Generic (PLEG): container finished" podID="61cfa86d-9594-400f-991c-4819838ee49d" containerID="d7245fe6bd32249972d2c1d392a4f51b4f5ebf9e70be8377cc24ae3ecb550072" exitCode=0 Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.051679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" event={"ID":"61cfa86d-9594-400f-991c-4819838ee49d","Type":"ContainerDied","Data":"d7245fe6bd32249972d2c1d392a4f51b4f5ebf9e70be8377cc24ae3ecb550072"} Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.051764 4805 scope.go:117] "RemoveContainer" containerID="978de6011fd828ff799cf647094509777fe4309c49daa9819685c7dbac9b2e74" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.057900 4805 generic.go:334] "Generic (PLEG): container finished" podID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerID="419cdd6923cedafe19d5a4fe6f5ebfe6bad6fd8ff807aa79ec7ee22f4b866cb8" exitCode=0 Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.058093 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerDied","Data":"419cdd6923cedafe19d5a4fe6f5ebfe6bad6fd8ff807aa79ec7ee22f4b866cb8"} Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.145895 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.217800 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.264391 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-utilities\") pod \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.264588 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h49r9\" (UniqueName: \"kubernetes.io/projected/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-kube-api-access-h49r9\") pod \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.264651 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-catalog-content\") pod \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\" (UID: \"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.266492 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-utilities" (OuterVolumeSpecName: "utilities") pod "2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" (UID: "2d6f0e6f-57cf-4586-8fe7-0df5145d4f33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.273854 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-kube-api-access-h49r9" (OuterVolumeSpecName: "kube-api-access-h49r9") pod "2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" (UID: "2d6f0e6f-57cf-4586-8fe7-0df5145d4f33"). InnerVolumeSpecName "kube-api-access-h49r9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.279184 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.366955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trcz8\" (UniqueName: \"kubernetes.io/projected/61cfa86d-9594-400f-991c-4819838ee49d-kube-api-access-trcz8\") pod \"61cfa86d-9594-400f-991c-4819838ee49d\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.367071 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfc64\" (UniqueName: \"kubernetes.io/projected/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-kube-api-access-xfc64\") pod \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.367109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-trusted-ca\") pod \"61cfa86d-9594-400f-991c-4819838ee49d\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.367165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-utilities\") pod \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.367244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-operator-metrics\") 
pod \"61cfa86d-9594-400f-991c-4819838ee49d\" (UID: \"61cfa86d-9594-400f-991c-4819838ee49d\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.367343 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-catalog-content\") pod \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\" (UID: \"c6a69f1e-7226-4f13-8dce-fcfc6c25f240\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.368502 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "61cfa86d-9594-400f-991c-4819838ee49d" (UID: "61cfa86d-9594-400f-991c-4819838ee49d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.369099 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.369134 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h49r9\" (UniqueName: \"kubernetes.io/projected/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-kube-api-access-h49r9\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.369151 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.370371 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-utilities" (OuterVolumeSpecName: "utilities") pod 
"c6a69f1e-7226-4f13-8dce-fcfc6c25f240" (UID: "c6a69f1e-7226-4f13-8dce-fcfc6c25f240"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.371909 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" (UID: "2d6f0e6f-57cf-4586-8fe7-0df5145d4f33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.372592 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-kube-api-access-xfc64" (OuterVolumeSpecName: "kube-api-access-xfc64") pod "c6a69f1e-7226-4f13-8dce-fcfc6c25f240" (UID: "c6a69f1e-7226-4f13-8dce-fcfc6c25f240"). InnerVolumeSpecName "kube-api-access-xfc64". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.376254 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "61cfa86d-9594-400f-991c-4819838ee49d" (UID: "61cfa86d-9594-400f-991c-4819838ee49d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.380611 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cfa86d-9594-400f-991c-4819838ee49d-kube-api-access-trcz8" (OuterVolumeSpecName: "kube-api-access-trcz8") pod "61cfa86d-9594-400f-991c-4819838ee49d" (UID: "61cfa86d-9594-400f-991c-4819838ee49d"). InnerVolumeSpecName "kube-api-access-trcz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.420740 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.454510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6a69f1e-7226-4f13-8dce-fcfc6c25f240" (UID: "c6a69f1e-7226-4f13-8dce-fcfc6c25f240"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.471403 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trcz8\" (UniqueName: \"kubernetes.io/projected/61cfa86d-9594-400f-991c-4819838ee49d-kube-api-access-trcz8\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.471440 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfc64\" (UniqueName: \"kubernetes.io/projected/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-kube-api-access-xfc64\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.471450 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.471461 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/61cfa86d-9594-400f-991c-4819838ee49d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.471470 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.471479 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a69f1e-7226-4f13-8dce-fcfc6c25f240-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.572911 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-catalog-content\") pod \"788046c6-ddf0-4f22-bc41-260efe363420\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.572988 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-utilities\") pod \"788046c6-ddf0-4f22-bc41-260efe363420\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.573036 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql9tl\" (UniqueName: \"kubernetes.io/projected/788046c6-ddf0-4f22-bc41-260efe363420-kube-api-access-ql9tl\") pod \"788046c6-ddf0-4f22-bc41-260efe363420\" (UID: \"788046c6-ddf0-4f22-bc41-260efe363420\") " Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.575246 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-utilities" (OuterVolumeSpecName: "utilities") pod "788046c6-ddf0-4f22-bc41-260efe363420" (UID: "788046c6-ddf0-4f22-bc41-260efe363420"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.578262 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788046c6-ddf0-4f22-bc41-260efe363420-kube-api-access-ql9tl" (OuterVolumeSpecName: "kube-api-access-ql9tl") pod "788046c6-ddf0-4f22-bc41-260efe363420" (UID: "788046c6-ddf0-4f22-bc41-260efe363420"). InnerVolumeSpecName "kube-api-access-ql9tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.592989 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "788046c6-ddf0-4f22-bc41-260efe363420" (UID: "788046c6-ddf0-4f22-bc41-260efe363420"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.675611 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.675664 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/788046c6-ddf0-4f22-bc41-260efe363420-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:39 crc kubenswrapper[4805]: I1203 00:13:39.675683 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql9tl\" (UniqueName: \"kubernetes.io/projected/788046c6-ddf0-4f22-bc41-260efe363420-kube-api-access-ql9tl\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.066914 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtztk" 
event={"ID":"2d6f0e6f-57cf-4586-8fe7-0df5145d4f33","Type":"ContainerDied","Data":"b2b8ae9ddb1a02220617f036b446434d8c4cce54531c9af731136623fdd36e6c"} Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.066991 4805 scope.go:117] "RemoveContainer" containerID="419cdd6923cedafe19d5a4fe6f5ebfe6bad6fd8ff807aa79ec7ee22f4b866cb8" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.066993 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtztk" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.070063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" event={"ID":"cad4d2ba-cd8b-4886-90cd-ff09fdbc6206","Type":"ContainerStarted","Data":"353051a308f79535dd601e0e1c37ebb74952508c41839ebc22e39965b3a21aa7"} Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.070375 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.072699 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerID="fc0f9bb92aa6f1bf26a573fc866f7afe42d647b17e963678eef0a2fda4a36d08" exitCode=0 Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.072791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r647s" event={"ID":"b4f086f6-ed16-47f7-a630-f480a86f4954","Type":"ContainerDied","Data":"fc0f9bb92aa6f1bf26a573fc866f7afe42d647b17e963678eef0a2fda4a36d08"} Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.077086 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.077347 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-htqft" event={"ID":"c6a69f1e-7226-4f13-8dce-fcfc6c25f240","Type":"ContainerDied","Data":"ced5342dab4576833d36eef04dc90c41f7a8224b9e40afc3b40344ae4277527e"} Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.077481 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htqft" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.080421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" event={"ID":"61cfa86d-9594-400f-991c-4819838ee49d","Type":"ContainerDied","Data":"bdff571b666ef32e8326f4d877c02e76ca52e704d83b72315fc2469c96860d12"} Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.080455 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rm6z" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.082904 4805 generic.go:334] "Generic (PLEG): container finished" podID="788046c6-ddf0-4f22-bc41-260efe363420" containerID="7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769" exitCode=0 Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.082951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hlqz" event={"ID":"788046c6-ddf0-4f22-bc41-260efe363420","Type":"ContainerDied","Data":"7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769"} Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.082968 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9hlqz" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.082987 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9hlqz" event={"ID":"788046c6-ddf0-4f22-bc41-260efe363420","Type":"ContainerDied","Data":"cc26fd9683c3c1146cec388e1370f80073b66505f66784b36f1d836806c8a8b0"} Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.099907 4805 scope.go:117] "RemoveContainer" containerID="50948eedfea00e37ddb7a3e1c54f1c53ada10831b6bd2294e8be0b8f2d2d1dcd" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.120298 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cks6r" podStartSLOduration=2.120274475 podStartE2EDuration="2.120274475s" podCreationTimestamp="2025-12-03 00:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:13:40.093075266 +0000 UTC m=+443.942037892" watchObservedRunningTime="2025-12-03 00:13:40.120274475 +0000 UTC m=+443.969237081" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.141422 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtztk"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.143604 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtztk"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.152821 4805 scope.go:117] "RemoveContainer" containerID="f3c92134d9c2f0972f451a2f244f1186c987e245a163eb14b1f7031e6780982d" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.156617 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rm6z"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.158100 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.162094 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rm6z"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.167814 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htqft"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.171414 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-htqft"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.181585 4805 scope.go:117] "RemoveContainer" containerID="105f2a7bd585ad3b1527246e03750f01c68e5f95b7e355308a5bd70efb844dc8" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.183297 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hlqz"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.199906 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9hlqz"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.210481 4805 scope.go:117] "RemoveContainer" containerID="00d83c0786c6ef8c3e6e5efdd61d9ecafc5cac0e410a459352addfa72f8dfa54" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.231782 4805 scope.go:117] "RemoveContainer" containerID="63ebcbed8204b54fe8f1bf842a6bb0cc0d3dcb23b02c5f544340e5455ff92873" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.252349 4805 scope.go:117] "RemoveContainer" containerID="d7245fe6bd32249972d2c1d392a4f51b4f5ebf9e70be8377cc24ae3ecb550072" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.272589 4805 scope.go:117] "RemoveContainer" containerID="7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.283781 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-scq7s\" (UniqueName: \"kubernetes.io/projected/b4f086f6-ed16-47f7-a630-f480a86f4954-kube-api-access-scq7s\") pod \"b4f086f6-ed16-47f7-a630-f480a86f4954\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.283850 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-utilities\") pod \"b4f086f6-ed16-47f7-a630-f480a86f4954\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.283930 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-catalog-content\") pod \"b4f086f6-ed16-47f7-a630-f480a86f4954\" (UID: \"b4f086f6-ed16-47f7-a630-f480a86f4954\") " Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.286218 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-utilities" (OuterVolumeSpecName: "utilities") pod "b4f086f6-ed16-47f7-a630-f480a86f4954" (UID: "b4f086f6-ed16-47f7-a630-f480a86f4954"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.288497 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f086f6-ed16-47f7-a630-f480a86f4954-kube-api-access-scq7s" (OuterVolumeSpecName: "kube-api-access-scq7s") pod "b4f086f6-ed16-47f7-a630-f480a86f4954" (UID: "b4f086f6-ed16-47f7-a630-f480a86f4954"). InnerVolumeSpecName "kube-api-access-scq7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.291389 4805 scope.go:117] "RemoveContainer" containerID="0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.319631 4805 scope.go:117] "RemoveContainer" containerID="ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.343562 4805 scope.go:117] "RemoveContainer" containerID="7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.344274 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769\": container with ID starting with 7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769 not found: ID does not exist" containerID="7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.344393 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769"} err="failed to get container status \"7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769\": rpc error: code = NotFound desc = could not find container \"7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769\": container with ID starting with 7c2fdba6a3ca3b3fe8f9fc098769e9ea960cb303c585475d8babb3804503f769 not found: ID does not exist" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.344456 4805 scope.go:117] "RemoveContainer" containerID="0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.345184 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416\": container with ID starting with 0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416 not found: ID does not exist" containerID="0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.345266 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416"} err="failed to get container status \"0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416\": rpc error: code = NotFound desc = could not find container \"0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416\": container with ID starting with 0c693b68e607788327a71e0821793ea8977ba7dc8c1dd7653dc1dad06fa03416 not found: ID does not exist" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.345306 4805 scope.go:117] "RemoveContainer" containerID="ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.345905 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf\": container with ID starting with ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf not found: ID does not exist" containerID="ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.345947 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf"} err="failed to get container status \"ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf\": rpc error: code = NotFound desc = could not find container \"ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf\": 
container with ID starting with ea3e2afc7993188710d7236da5b2e4eb361ee7b37abd505560fe0f2e825379cf not found: ID does not exist" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.386090 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scq7s\" (UniqueName: \"kubernetes.io/projected/b4f086f6-ed16-47f7-a630-f480a86f4954-kube-api-access-scq7s\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.386147 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.406654 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4f086f6-ed16-47f7-a630-f480a86f4954" (UID: "b4f086f6-ed16-47f7-a630-f480a86f4954"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.431415 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" path="/var/lib/kubelet/pods/2d6f0e6f-57cf-4586-8fe7-0df5145d4f33/volumes" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.432238 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61cfa86d-9594-400f-991c-4819838ee49d" path="/var/lib/kubelet/pods/61cfa86d-9594-400f-991c-4819838ee49d/volumes" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.432786 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788046c6-ddf0-4f22-bc41-260efe363420" path="/var/lib/kubelet/pods/788046c6-ddf0-4f22-bc41-260efe363420/volumes" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.434358 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" path="/var/lib/kubelet/pods/c6a69f1e-7226-4f13-8dce-fcfc6c25f240/volumes" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.487523 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f086f6-ed16-47f7-a630-f480a86f4954-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613371 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vns"] Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613625 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613638 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613651 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613657 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613664 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613671 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613684 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613690 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613696 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613701 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613710 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613716 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613724 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613730 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613737 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613743 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613752 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613758 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613767 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613772 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613780 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613785 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="extract-content" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613791 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613797 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="extract-utilities" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.613806 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613812 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613895 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6f0e6f-57cf-4586-8fe7-0df5145d4f33" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613904 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="788046c6-ddf0-4f22-bc41-260efe363420" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613914 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613922 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a69f1e-7226-4f13-8dce-fcfc6c25f240" containerName="registry-server" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613930 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.613937 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" Dec 03 00:13:40 crc kubenswrapper[4805]: E1203 00:13:40.614014 4805 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.614021 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cfa86d-9594-400f-991c-4819838ee49d" containerName="marketplace-operator" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.614711 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.617173 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.634795 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vns"] Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.691605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-utilities\") pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.692182 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-catalog-content\") pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.692288 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whnxg\" (UniqueName: \"kubernetes.io/projected/8f34256d-c41f-4c52-8d34-5574c5b3e862-kube-api-access-whnxg\") 
pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.794412 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-catalog-content\") pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.794536 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whnxg\" (UniqueName: \"kubernetes.io/projected/8f34256d-c41f-4c52-8d34-5574c5b3e862-kube-api-access-whnxg\") pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.794635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-utilities\") pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.795940 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-utilities\") pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.797069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-catalog-content\") pod \"redhat-marketplace-d4vns\" (UID: 
\"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.816226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whnxg\" (UniqueName: \"kubernetes.io/projected/8f34256d-c41f-4c52-8d34-5574c5b3e862-kube-api-access-whnxg\") pod \"redhat-marketplace-d4vns\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:40 crc kubenswrapper[4805]: I1203 00:13:40.949184 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.102306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r647s" event={"ID":"b4f086f6-ed16-47f7-a630-f480a86f4954","Type":"ContainerDied","Data":"bb5bfad7070a40d0c0b79aa3c419ed5d7597218312105a8be35284bd80070783"} Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.102384 4805 scope.go:117] "RemoveContainer" containerID="fc0f9bb92aa6f1bf26a573fc866f7afe42d647b17e963678eef0a2fda4a36d08" Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.102569 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r647s" Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.136208 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r647s"] Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.148043 4805 scope.go:117] "RemoveContainer" containerID="835498e3877620b59b67c40be8711db6661b59174a027c2daea8c17c4a9dd12c" Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.153881 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r647s"] Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.168090 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vns"] Dec 03 00:13:41 crc kubenswrapper[4805]: I1203 00:13:41.174241 4805 scope.go:117] "RemoveContainer" containerID="217b9d9d0a4c028d10e4b0340685fb0495175d1d69790ba1cff4ad3796c27fc5" Dec 03 00:13:41 crc kubenswrapper[4805]: W1203 00:13:41.177897 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f34256d_c41f_4c52_8d34_5574c5b3e862.slice/crio-c6c62f3569a05f19a35efd809fc024d3baed39c5f809028e32c14a71303a6541 WatchSource:0}: Error finding container c6c62f3569a05f19a35efd809fc024d3baed39c5f809028e32c14a71303a6541: Status 404 returned error can't find the container with id c6c62f3569a05f19a35efd809fc024d3baed39c5f809028e32c14a71303a6541 Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.142003 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerID="5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede" exitCode=0 Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.142241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vns" 
event={"ID":"8f34256d-c41f-4c52-8d34-5574c5b3e862","Type":"ContainerDied","Data":"5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede"} Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.144335 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vns" event={"ID":"8f34256d-c41f-4c52-8d34-5574c5b3e862","Type":"ContainerStarted","Data":"c6c62f3569a05f19a35efd809fc024d3baed39c5f809028e32c14a71303a6541"} Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.431414 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f086f6-ed16-47f7-a630-f480a86f4954" path="/var/lib/kubelet/pods/b4f086f6-ed16-47f7-a630-f480a86f4954/volumes" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.610769 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hsx6d"] Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.611979 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.617499 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.619105 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsx6d"] Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.733386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-utilities\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.733504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-catalog-content\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.733821 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv9q9\" (UniqueName: \"kubernetes.io/projected/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-kube-api-access-wv9q9\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.835487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv9q9\" (UniqueName: \"kubernetes.io/projected/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-kube-api-access-wv9q9\") pod \"redhat-operators-hsx6d\" (UID: 
\"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.835840 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-utilities\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.835915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-catalog-content\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.836562 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-catalog-content\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.836707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-utilities\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.862185 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv9q9\" (UniqueName: \"kubernetes.io/projected/b90bcdd7-2155-4f6f-bad9-19cea6e78c63-kube-api-access-wv9q9\") pod \"redhat-operators-hsx6d\" (UID: \"b90bcdd7-2155-4f6f-bad9-19cea6e78c63\") " 
pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:42 crc kubenswrapper[4805]: I1203 00:13:42.962403 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.157276 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerID="3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88" exitCode=0 Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.158550 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vns" event={"ID":"8f34256d-c41f-4c52-8d34-5574c5b3e862","Type":"ContainerDied","Data":"3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88"} Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.208573 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wcfkh"] Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.210144 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.212126 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.216733 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcfkh"] Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.343963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gt2r\" (UniqueName: \"kubernetes.io/projected/d07c760a-7a6a-48a8-aec2-4beb15f31c70-kube-api-access-7gt2r\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.344037 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c760a-7a6a-48a8-aec2-4beb15f31c70-catalog-content\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.344089 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c760a-7a6a-48a8-aec2-4beb15f31c70-utilities\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.367542 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hsx6d"] Dec 03 00:13:43 crc kubenswrapper[4805]: W1203 00:13:43.377668 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb90bcdd7_2155_4f6f_bad9_19cea6e78c63.slice/crio-2735fde8d6520dd30f9618f4d848a671f4d127ac1b8722a5cd7f408ebad9db3e WatchSource:0}: Error finding container 2735fde8d6520dd30f9618f4d848a671f4d127ac1b8722a5cd7f408ebad9db3e: Status 404 returned error can't find the container with id 2735fde8d6520dd30f9618f4d848a671f4d127ac1b8722a5cd7f408ebad9db3e Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.445862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gt2r\" (UniqueName: \"kubernetes.io/projected/d07c760a-7a6a-48a8-aec2-4beb15f31c70-kube-api-access-7gt2r\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.445940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c760a-7a6a-48a8-aec2-4beb15f31c70-catalog-content\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.445992 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c760a-7a6a-48a8-aec2-4beb15f31c70-utilities\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.446650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c760a-7a6a-48a8-aec2-4beb15f31c70-catalog-content\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 
00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.446737 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c760a-7a6a-48a8-aec2-4beb15f31c70-utilities\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.472304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gt2r\" (UniqueName: \"kubernetes.io/projected/d07c760a-7a6a-48a8-aec2-4beb15f31c70-kube-api-access-7gt2r\") pod \"community-operators-wcfkh\" (UID: \"d07c760a-7a6a-48a8-aec2-4beb15f31c70\") " pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.533038 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:43 crc kubenswrapper[4805]: I1203 00:13:43.771243 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcfkh"] Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.166190 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vns" event={"ID":"8f34256d-c41f-4c52-8d34-5574c5b3e862","Type":"ContainerStarted","Data":"c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba"} Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.167469 4805 generic.go:334] "Generic (PLEG): container finished" podID="b90bcdd7-2155-4f6f-bad9-19cea6e78c63" containerID="3e5fd08fc8bf1c1e61a969fe92fe8cf39b8ea955b47e4ff277ffc1759a319e7e" exitCode=0 Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.167530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsx6d" 
event={"ID":"b90bcdd7-2155-4f6f-bad9-19cea6e78c63","Type":"ContainerDied","Data":"3e5fd08fc8bf1c1e61a969fe92fe8cf39b8ea955b47e4ff277ffc1759a319e7e"} Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.167557 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsx6d" event={"ID":"b90bcdd7-2155-4f6f-bad9-19cea6e78c63","Type":"ContainerStarted","Data":"2735fde8d6520dd30f9618f4d848a671f4d127ac1b8722a5cd7f408ebad9db3e"} Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.169427 4805 generic.go:334] "Generic (PLEG): container finished" podID="d07c760a-7a6a-48a8-aec2-4beb15f31c70" containerID="c9ef7fa7be37fd39bf7402c0b5208c3caf4e313a047eee30868d25181b430d86" exitCode=0 Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.169501 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcfkh" event={"ID":"d07c760a-7a6a-48a8-aec2-4beb15f31c70","Type":"ContainerDied","Data":"c9ef7fa7be37fd39bf7402c0b5208c3caf4e313a047eee30868d25181b430d86"} Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.169524 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcfkh" event={"ID":"d07c760a-7a6a-48a8-aec2-4beb15f31c70","Type":"ContainerStarted","Data":"1a4fb9b75a0f8bac927dabbbc9c95d9bb834bc31b5b397514e2ab7590431079a"} Dec 03 00:13:44 crc kubenswrapper[4805]: I1203 00:13:44.187510 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d4vns" podStartSLOduration=2.753531428 podStartE2EDuration="4.187486304s" podCreationTimestamp="2025-12-03 00:13:40 +0000 UTC" firstStartedPulling="2025-12-03 00:13:42.144698917 +0000 UTC m=+445.993661523" lastFinishedPulling="2025-12-03 00:13:43.578653793 +0000 UTC m=+447.427616399" observedRunningTime="2025-12-03 00:13:44.185331898 +0000 UTC m=+448.034294514" watchObservedRunningTime="2025-12-03 00:13:44.187486304 +0000 UTC 
m=+448.036448910" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.012560 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sn25s"] Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.014376 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.016479 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.026445 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sn25s"] Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.171079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/566e3605-01a5-487e-9152-a9de0f1aa9e7-catalog-content\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.171537 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/566e3605-01a5-487e-9152-a9de0f1aa9e7-utilities\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.173033 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqcr\" (UniqueName: \"kubernetes.io/projected/566e3605-01a5-487e-9152-a9de0f1aa9e7-kube-api-access-pwqcr\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 
03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.192846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsx6d" event={"ID":"b90bcdd7-2155-4f6f-bad9-19cea6e78c63","Type":"ContainerStarted","Data":"9b80de60826e0e8101905eb55ef7d9b0153fbf9dd3e941e670743adffa8a69b5"} Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.199007 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcfkh" event={"ID":"d07c760a-7a6a-48a8-aec2-4beb15f31c70","Type":"ContainerStarted","Data":"7b839f293f0f3e446270f957c4d736d128e8b4fee8b561873070b9b569c9656c"} Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.276461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/566e3605-01a5-487e-9152-a9de0f1aa9e7-catalog-content\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.276588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/566e3605-01a5-487e-9152-a9de0f1aa9e7-utilities\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.276649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqcr\" (UniqueName: \"kubernetes.io/projected/566e3605-01a5-487e-9152-a9de0f1aa9e7-kube-api-access-pwqcr\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.277472 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/566e3605-01a5-487e-9152-a9de0f1aa9e7-utilities\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.277563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/566e3605-01a5-487e-9152-a9de0f1aa9e7-catalog-content\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.298105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqcr\" (UniqueName: \"kubernetes.io/projected/566e3605-01a5-487e-9152-a9de0f1aa9e7-kube-api-access-pwqcr\") pod \"certified-operators-sn25s\" (UID: \"566e3605-01a5-487e-9152-a9de0f1aa9e7\") " pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.329255 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:45 crc kubenswrapper[4805]: I1203 00:13:45.580943 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sn25s"] Dec 03 00:13:46 crc kubenswrapper[4805]: I1203 00:13:46.208460 4805 generic.go:334] "Generic (PLEG): container finished" podID="566e3605-01a5-487e-9152-a9de0f1aa9e7" containerID="e618b143f0b316e7f6276708f3f9c0337e2ee3d767c94eb7c93eea2b79e580dc" exitCode=0 Dec 03 00:13:46 crc kubenswrapper[4805]: I1203 00:13:46.208582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn25s" event={"ID":"566e3605-01a5-487e-9152-a9de0f1aa9e7","Type":"ContainerDied","Data":"e618b143f0b316e7f6276708f3f9c0337e2ee3d767c94eb7c93eea2b79e580dc"} Dec 03 00:13:46 crc kubenswrapper[4805]: I1203 00:13:46.209237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn25s" event={"ID":"566e3605-01a5-487e-9152-a9de0f1aa9e7","Type":"ContainerStarted","Data":"4103c800271244c04f10f85e7d085b9a94d0cb7b515f81c606e350b1110cf44c"} Dec 03 00:13:46 crc kubenswrapper[4805]: I1203 00:13:46.211867 4805 generic.go:334] "Generic (PLEG): container finished" podID="b90bcdd7-2155-4f6f-bad9-19cea6e78c63" containerID="9b80de60826e0e8101905eb55ef7d9b0153fbf9dd3e941e670743adffa8a69b5" exitCode=0 Dec 03 00:13:46 crc kubenswrapper[4805]: I1203 00:13:46.211993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsx6d" event={"ID":"b90bcdd7-2155-4f6f-bad9-19cea6e78c63","Type":"ContainerDied","Data":"9b80de60826e0e8101905eb55ef7d9b0153fbf9dd3e941e670743adffa8a69b5"} Dec 03 00:13:46 crc kubenswrapper[4805]: I1203 00:13:46.214786 4805 generic.go:334] "Generic (PLEG): container finished" podID="d07c760a-7a6a-48a8-aec2-4beb15f31c70" containerID="7b839f293f0f3e446270f957c4d736d128e8b4fee8b561873070b9b569c9656c" exitCode=0 Dec 03 00:13:46 
crc kubenswrapper[4805]: I1203 00:13:46.214821 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcfkh" event={"ID":"d07c760a-7a6a-48a8-aec2-4beb15f31c70","Type":"ContainerDied","Data":"7b839f293f0f3e446270f957c4d736d128e8b4fee8b561873070b9b569c9656c"} Dec 03 00:13:47 crc kubenswrapper[4805]: I1203 00:13:47.225259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hsx6d" event={"ID":"b90bcdd7-2155-4f6f-bad9-19cea6e78c63","Type":"ContainerStarted","Data":"b71d7ebb972598b6188815e7596c6e9240a114f508ca686ea520bd053c4854c9"} Dec 03 00:13:47 crc kubenswrapper[4805]: I1203 00:13:47.227777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcfkh" event={"ID":"d07c760a-7a6a-48a8-aec2-4beb15f31c70","Type":"ContainerStarted","Data":"57774f89fb30b463bad54aa714887c9ccab7da626b3babe438db53e1c028ced6"} Dec 03 00:13:47 crc kubenswrapper[4805]: I1203 00:13:47.229702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn25s" event={"ID":"566e3605-01a5-487e-9152-a9de0f1aa9e7","Type":"ContainerStarted","Data":"f92a7272add05bf8f0c3f79740d20148ea9ef235a8bbfaf621267be5a12d0f06"} Dec 03 00:13:47 crc kubenswrapper[4805]: I1203 00:13:47.252474 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hsx6d" podStartSLOduration=2.574680452 podStartE2EDuration="5.252450181s" podCreationTimestamp="2025-12-03 00:13:42 +0000 UTC" firstStartedPulling="2025-12-03 00:13:44.169977312 +0000 UTC m=+448.018939928" lastFinishedPulling="2025-12-03 00:13:46.847747041 +0000 UTC m=+450.696709657" observedRunningTime="2025-12-03 00:13:47.249424791 +0000 UTC m=+451.098387417" watchObservedRunningTime="2025-12-03 00:13:47.252450181 +0000 UTC m=+451.101412797" Dec 03 00:13:47 crc kubenswrapper[4805]: I1203 00:13:47.275122 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wcfkh" podStartSLOduration=1.733304071 podStartE2EDuration="4.27509707s" podCreationTimestamp="2025-12-03 00:13:43 +0000 UTC" firstStartedPulling="2025-12-03 00:13:44.171928633 +0000 UTC m=+448.020891239" lastFinishedPulling="2025-12-03 00:13:46.713721622 +0000 UTC m=+450.562684238" observedRunningTime="2025-12-03 00:13:47.272932302 +0000 UTC m=+451.121894928" watchObservedRunningTime="2025-12-03 00:13:47.27509707 +0000 UTC m=+451.124059676" Dec 03 00:13:48 crc kubenswrapper[4805]: I1203 00:13:48.237994 4805 generic.go:334] "Generic (PLEG): container finished" podID="566e3605-01a5-487e-9152-a9de0f1aa9e7" containerID="f92a7272add05bf8f0c3f79740d20148ea9ef235a8bbfaf621267be5a12d0f06" exitCode=0 Dec 03 00:13:48 crc kubenswrapper[4805]: I1203 00:13:48.238077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn25s" event={"ID":"566e3605-01a5-487e-9152-a9de0f1aa9e7","Type":"ContainerDied","Data":"f92a7272add05bf8f0c3f79740d20148ea9ef235a8bbfaf621267be5a12d0f06"} Dec 03 00:13:48 crc kubenswrapper[4805]: I1203 00:13:48.637002 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fzrz8" Dec 03 00:13:48 crc kubenswrapper[4805]: I1203 00:13:48.693335 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kdg2s"] Dec 03 00:13:50 crc kubenswrapper[4805]: I1203 00:13:50.256232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn25s" event={"ID":"566e3605-01a5-487e-9152-a9de0f1aa9e7","Type":"ContainerStarted","Data":"eea566df79564c3de7cc10db550d46aa4b1329143bcc08240e61e3e7487c8017"} Dec 03 00:13:50 crc kubenswrapper[4805]: I1203 00:13:50.277114 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-sn25s" podStartSLOduration=3.484539852 podStartE2EDuration="6.277091283s" podCreationTimestamp="2025-12-03 00:13:44 +0000 UTC" firstStartedPulling="2025-12-03 00:13:46.211242199 +0000 UTC m=+450.060204805" lastFinishedPulling="2025-12-03 00:13:49.00379363 +0000 UTC m=+452.852756236" observedRunningTime="2025-12-03 00:13:50.274021181 +0000 UTC m=+454.122983787" watchObservedRunningTime="2025-12-03 00:13:50.277091283 +0000 UTC m=+454.126053889" Dec 03 00:13:50 crc kubenswrapper[4805]: I1203 00:13:50.950058 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:50 crc kubenswrapper[4805]: I1203 00:13:50.950449 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:50 crc kubenswrapper[4805]: I1203 00:13:50.998797 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:51 crc kubenswrapper[4805]: I1203 00:13:51.310359 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:13:52 crc kubenswrapper[4805]: I1203 00:13:52.963418 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:52 crc kubenswrapper[4805]: I1203 00:13:52.963740 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:53 crc kubenswrapper[4805]: I1203 00:13:53.006895 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hsx6d" Dec 03 00:13:53 crc kubenswrapper[4805]: I1203 00:13:53.312293 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hsx6d" 
Dec 03 00:13:53 crc kubenswrapper[4805]: I1203 00:13:53.534248 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:53 crc kubenswrapper[4805]: I1203 00:13:53.534317 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:53 crc kubenswrapper[4805]: I1203 00:13:53.577723 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:54 crc kubenswrapper[4805]: I1203 00:13:54.323937 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wcfkh" Dec 03 00:13:55 crc kubenswrapper[4805]: I1203 00:13:55.331139 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:55 crc kubenswrapper[4805]: I1203 00:13:55.331713 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:55 crc kubenswrapper[4805]: I1203 00:13:55.388262 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:13:56 crc kubenswrapper[4805]: I1203 00:13:56.352361 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sn25s" Dec 03 00:14:13 crc kubenswrapper[4805]: I1203 00:14:13.738697 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" podUID="be50df79-cb92-4c40-81ea-a90cee61b549" containerName="registry" containerID="cri-o://a529c89359277f776733285244226619b151a9fc20740447044309630f905eea" gracePeriod=30 Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.177505 4805 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be50df79-cb92-4c40-81ea-a90cee61b549-ca-trust-extracted\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241148 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be50df79-cb92-4c40-81ea-a90cee61b549-installation-pull-secrets\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241295 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-trusted-ca\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241363 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-registry-tls\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241457 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-bound-sa-token\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241484 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-registry-certificates\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241535 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn4xl\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-kube-api-access-cn4xl\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.241881 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"be50df79-cb92-4c40-81ea-a90cee61b549\" (UID: \"be50df79-cb92-4c40-81ea-a90cee61b549\") " Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.244266 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.244790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.253127 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be50df79-cb92-4c40-81ea-a90cee61b549-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.253213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.255239 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.256695 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.257086 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-kube-api-access-cn4xl" (OuterVolumeSpecName: "kube-api-access-cn4xl") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "kube-api-access-cn4xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.262323 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be50df79-cb92-4c40-81ea-a90cee61b549-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "be50df79-cb92-4c40-81ea-a90cee61b549" (UID: "be50df79-cb92-4c40-81ea-a90cee61b549"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.343715 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.343752 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.343764 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn4xl\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-kube-api-access-cn4xl\") on node \"crc\" DevicePath \"\"" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.343774 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/be50df79-cb92-4c40-81ea-a90cee61b549-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.343783 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be50df79-cb92-4c40-81ea-a90cee61b549-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.343791 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be50df79-cb92-4c40-81ea-a90cee61b549-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.343800 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be50df79-cb92-4c40-81ea-a90cee61b549-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.435356 4805 generic.go:334] "Generic (PLEG): container finished" podID="be50df79-cb92-4c40-81ea-a90cee61b549" containerID="a529c89359277f776733285244226619b151a9fc20740447044309630f905eea" exitCode=0 Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.435416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" event={"ID":"be50df79-cb92-4c40-81ea-a90cee61b549","Type":"ContainerDied","Data":"a529c89359277f776733285244226619b151a9fc20740447044309630f905eea"} Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.435455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" event={"ID":"be50df79-cb92-4c40-81ea-a90cee61b549","Type":"ContainerDied","Data":"9b4829b6438226519b8869926ced3f0afc6362abfb00273c5978a9692f4374e9"} Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.435476 4805 scope.go:117] "RemoveContainer" 
containerID="a529c89359277f776733285244226619b151a9fc20740447044309630f905eea" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.435426 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kdg2s" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.455142 4805 scope.go:117] "RemoveContainer" containerID="a529c89359277f776733285244226619b151a9fc20740447044309630f905eea" Dec 03 00:14:14 crc kubenswrapper[4805]: E1203 00:14:14.456093 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a529c89359277f776733285244226619b151a9fc20740447044309630f905eea\": container with ID starting with a529c89359277f776733285244226619b151a9fc20740447044309630f905eea not found: ID does not exist" containerID="a529c89359277f776733285244226619b151a9fc20740447044309630f905eea" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.456126 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a529c89359277f776733285244226619b151a9fc20740447044309630f905eea"} err="failed to get container status \"a529c89359277f776733285244226619b151a9fc20740447044309630f905eea\": rpc error: code = NotFound desc = could not find container \"a529c89359277f776733285244226619b151a9fc20740447044309630f905eea\": container with ID starting with a529c89359277f776733285244226619b151a9fc20740447044309630f905eea not found: ID does not exist" Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.466925 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kdg2s"] Dec 03 00:14:14 crc kubenswrapper[4805]: I1203 00:14:14.470909 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kdg2s"] Dec 03 00:14:16 crc kubenswrapper[4805]: I1203 00:14:16.435580 4805 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="be50df79-cb92-4c40-81ea-a90cee61b549" path="/var/lib/kubelet/pods/be50df79-cb92-4c40-81ea-a90cee61b549/volumes" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.178236 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq"] Dec 03 00:15:00 crc kubenswrapper[4805]: E1203 00:15:00.179171 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be50df79-cb92-4c40-81ea-a90cee61b549" containerName="registry" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.179190 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="be50df79-cb92-4c40-81ea-a90cee61b549" containerName="registry" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.179396 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="be50df79-cb92-4c40-81ea-a90cee61b549" containerName="registry" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.179958 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.182592 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.182601 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.194570 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq"] Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.335870 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b26vs\" (UniqueName: \"kubernetes.io/projected/1ffa5929-9309-47fb-a366-347d4c696ccb-kube-api-access-b26vs\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.335952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ffa5929-9309-47fb-a366-347d4c696ccb-config-volume\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.336024 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ffa5929-9309-47fb-a366-347d4c696ccb-secret-volume\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.437364 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ffa5929-9309-47fb-a366-347d4c696ccb-config-volume\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.437430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ffa5929-9309-47fb-a366-347d4c696ccb-secret-volume\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.437517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b26vs\" (UniqueName: \"kubernetes.io/projected/1ffa5929-9309-47fb-a366-347d4c696ccb-kube-api-access-b26vs\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.438342 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ffa5929-9309-47fb-a366-347d4c696ccb-config-volume\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.448108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1ffa5929-9309-47fb-a366-347d4c696ccb-secret-volume\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.457368 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b26vs\" (UniqueName: \"kubernetes.io/projected/1ffa5929-9309-47fb-a366-347d4c696ccb-kube-api-access-b26vs\") pod \"collect-profiles-29412015-2mzpq\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.509255 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:00 crc kubenswrapper[4805]: I1203 00:15:00.714224 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq"] Dec 03 00:15:00 crc kubenswrapper[4805]: W1203 00:15:00.724924 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffa5929_9309_47fb_a366_347d4c696ccb.slice/crio-50c626dbbacad56f891fd03f9b3a91ed906c3728ffe59ab9bd5847670661e92b WatchSource:0}: Error finding container 50c626dbbacad56f891fd03f9b3a91ed906c3728ffe59ab9bd5847670661e92b: Status 404 returned error can't find the container with id 50c626dbbacad56f891fd03f9b3a91ed906c3728ffe59ab9bd5847670661e92b Dec 03 00:15:01 crc kubenswrapper[4805]: I1203 00:15:01.721869 4805 generic.go:334] "Generic (PLEG): container finished" podID="1ffa5929-9309-47fb-a366-347d4c696ccb" containerID="0aa6db3a2c65c0b2c333459f9e4036519a70cc2668b77c9a512ecd8abe94ffd2" exitCode=0 Dec 03 00:15:01 crc kubenswrapper[4805]: I1203 00:15:01.721984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" event={"ID":"1ffa5929-9309-47fb-a366-347d4c696ccb","Type":"ContainerDied","Data":"0aa6db3a2c65c0b2c333459f9e4036519a70cc2668b77c9a512ecd8abe94ffd2"} Dec 03 00:15:01 crc kubenswrapper[4805]: I1203 00:15:01.722254 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" event={"ID":"1ffa5929-9309-47fb-a366-347d4c696ccb","Type":"ContainerStarted","Data":"50c626dbbacad56f891fd03f9b3a91ed906c3728ffe59ab9bd5847670661e92b"} Dec 03 00:15:02 crc kubenswrapper[4805]: I1203 00:15:02.933005 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.075876 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ffa5929-9309-47fb-a366-347d4c696ccb-config-volume\") pod \"1ffa5929-9309-47fb-a366-347d4c696ccb\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.075923 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ffa5929-9309-47fb-a366-347d4c696ccb-secret-volume\") pod \"1ffa5929-9309-47fb-a366-347d4c696ccb\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.075996 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b26vs\" (UniqueName: \"kubernetes.io/projected/1ffa5929-9309-47fb-a366-347d4c696ccb-kube-api-access-b26vs\") pod \"1ffa5929-9309-47fb-a366-347d4c696ccb\" (UID: \"1ffa5929-9309-47fb-a366-347d4c696ccb\") " Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.076834 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1ffa5929-9309-47fb-a366-347d4c696ccb-config-volume" (OuterVolumeSpecName: "config-volume") pod "1ffa5929-9309-47fb-a366-347d4c696ccb" (UID: "1ffa5929-9309-47fb-a366-347d4c696ccb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.077247 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ffa5929-9309-47fb-a366-347d4c696ccb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.081336 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ffa5929-9309-47fb-a366-347d4c696ccb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1ffa5929-9309-47fb-a366-347d4c696ccb" (UID: "1ffa5929-9309-47fb-a366-347d4c696ccb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.081471 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ffa5929-9309-47fb-a366-347d4c696ccb-kube-api-access-b26vs" (OuterVolumeSpecName: "kube-api-access-b26vs") pod "1ffa5929-9309-47fb-a366-347d4c696ccb" (UID: "1ffa5929-9309-47fb-a366-347d4c696ccb"). InnerVolumeSpecName "kube-api-access-b26vs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.178036 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ffa5929-9309-47fb-a366-347d4c696ccb-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.178457 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b26vs\" (UniqueName: \"kubernetes.io/projected/1ffa5929-9309-47fb-a366-347d4c696ccb-kube-api-access-b26vs\") on node \"crc\" DevicePath \"\"" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.735137 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" event={"ID":"1ffa5929-9309-47fb-a366-347d4c696ccb","Type":"ContainerDied","Data":"50c626dbbacad56f891fd03f9b3a91ed906c3728ffe59ab9bd5847670661e92b"} Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.735191 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50c626dbbacad56f891fd03f9b3a91ed906c3728ffe59ab9bd5847670661e92b" Dec 03 00:15:03 crc kubenswrapper[4805]: I1203 00:15:03.735286 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412015-2mzpq" Dec 03 00:15:16 crc kubenswrapper[4805]: I1203 00:15:16.655233 4805 scope.go:117] "RemoveContainer" containerID="d8a4827fd15ca32907e9a0b422f14dee422d1fa444b568bfcc1b4f4aff8be17b" Dec 03 00:15:47 crc kubenswrapper[4805]: I1203 00:15:47.811876 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:15:47 crc kubenswrapper[4805]: I1203 00:15:47.812655 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:16:17 crc kubenswrapper[4805]: I1203 00:16:17.811637 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:16:17 crc kubenswrapper[4805]: I1203 00:16:17.812264 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:16:47 crc kubenswrapper[4805]: I1203 00:16:47.811806 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:16:47 crc kubenswrapper[4805]: I1203 00:16:47.812471 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:16:47 crc kubenswrapper[4805]: I1203 00:16:47.812540 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:16:47 crc kubenswrapper[4805]: I1203 00:16:47.813333 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d57f60c8e52a89583b1e40f506517f73a5b87757f993a4d20080eabc8d60d72"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:16:47 crc kubenswrapper[4805]: I1203 00:16:47.813391 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://2d57f60c8e52a89583b1e40f506517f73a5b87757f993a4d20080eabc8d60d72" gracePeriod=600 Dec 03 00:16:48 crc kubenswrapper[4805]: I1203 00:16:48.338940 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="2d57f60c8e52a89583b1e40f506517f73a5b87757f993a4d20080eabc8d60d72" exitCode=0 Dec 03 00:16:48 crc kubenswrapper[4805]: I1203 00:16:48.339029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" 
event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"2d57f60c8e52a89583b1e40f506517f73a5b87757f993a4d20080eabc8d60d72"} Dec 03 00:16:48 crc kubenswrapper[4805]: I1203 00:16:48.339509 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"c1e997872ae5cc6752d7b081f3a651fb9d62664b89bdfb81c87803944fd10204"} Dec 03 00:16:48 crc kubenswrapper[4805]: I1203 00:16:48.339554 4805 scope.go:117] "RemoveContainer" containerID="b197c067b1f4b6da9cb594f04bc4f3715facaffce52939947e8f8684e3a78115" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.182520 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6pk5"] Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.183827 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-controller" containerID="cri-o://c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.184027 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="northd" containerID="cri-o://13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.184067 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 
00:18:40.184119 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-acl-logging" containerID="cri-o://f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.184174 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="nbdb" containerID="cri-o://07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.184242 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-node" containerID="cri-o://5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.184152 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="sbdb" containerID="cri-o://f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.238957 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" containerID="cri-o://396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" gracePeriod=30 Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.476596 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/3.log" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.478823 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovn-acl-logging/0.log" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.479299 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovn-controller/0.log" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.479785 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541438 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pdgzd"] Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541782 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="northd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541803 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="northd" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541817 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-acl-logging" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541824 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-acl-logging" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541832 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541840 
4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541847 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="sbdb" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541853 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="sbdb" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541859 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541865 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541875 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-node" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541881 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-node" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541894 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541900 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541909 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 
00:18:40.541914 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541921 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kubecfg-setup" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541928 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kubecfg-setup" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541939 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541945 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541953 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffa5929-9309-47fb-a366-347d4c696ccb" containerName="collect-profiles" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541959 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffa5929-9309-47fb-a366-347d4c696ccb" containerName="collect-profiles" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541968 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="nbdb" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541974 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="nbdb" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.541981 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.541987 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542084 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542092 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542102 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-acl-logging" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542111 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovn-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542122 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="nbdb" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542130 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542139 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="northd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542148 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542153 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ffa5929-9309-47fb-a366-347d4c696ccb" containerName="collect-profiles" Dec 03 00:18:40 crc kubenswrapper[4805]: 
I1203 00:18:40.542160 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="kube-rbac-proxy-node" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542168 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="sbdb" Dec 03 00:18:40 crc kubenswrapper[4805]: E1203 00:18:40.542305 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542314 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542413 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.542622 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerName="ovnkube-controller" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.544183 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6mm9\" (UniqueName: \"kubernetes.io/projected/2dbad567-2c97-49dd-ac90-41fd66a3b606-kube-api-access-l6mm9\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572093 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-systemd-units\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572174 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-env-overrides\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572227 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-script-lib\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572283 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-bin\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572316 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-systemd\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572344 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-etc-openvswitch\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-ovn\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572465 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovn-node-metrics-cert\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572495 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-config\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" 
(UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572522 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-node-log\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572551 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-log-socket\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572574 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-kubelet\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572637 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-slash\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572666 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-netd\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572693 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-ovn-kubernetes\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-var-lib-openvswitch\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572736 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-netns\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572755 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-openvswitch\") pod \"2dbad567-2c97-49dd-ac90-41fd66a3b606\" (UID: \"2dbad567-2c97-49dd-ac90-41fd66a3b606\") " Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572938 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.572987 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-etc-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: 
\"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573019 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-env-overrides\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573047 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-systemd-units\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqkc4\" (UniqueName: \"kubernetes.io/projected/cb924494-bcba-4fb8-abdf-54974f9d8344-kube-api-access-jqkc4\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573146 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-run-ovn-kubernetes\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573183 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-kubelet\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-ovnkube-script-lib\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-run-netns\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-node-log\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-ovnkube-config\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573337 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-slash\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-systemd\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573385 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-log-socket\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573407 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-cni-bin\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-ovn\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-var-lib-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573484 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-cni-netd\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.573512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb924494-bcba-4fb8-abdf-54974f9d8344-ovn-node-metrics-cert\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574107 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574184 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574228 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574250 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574237 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-slash" (OuterVolumeSpecName: "host-slash") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574250 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574274 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574720 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574733 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574741 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574752 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-node-log" (OuterVolumeSpecName: "node-log") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574753 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574779 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-log-socket" (OuterVolumeSpecName: "log-socket") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574779 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.574793 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.580081 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbad567-2c97-49dd-ac90-41fd66a3b606-kube-api-access-l6mm9" (OuterVolumeSpecName: "kube-api-access-l6mm9") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "kube-api-access-l6mm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.580347 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.588335 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2dbad567-2c97-49dd-ac90-41fd66a3b606" (UID: "2dbad567-2c97-49dd-ac90-41fd66a3b606"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-systemd-units\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqkc4\" (UniqueName: \"kubernetes.io/projected/cb924494-bcba-4fb8-abdf-54974f9d8344-kube-api-access-jqkc4\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674620 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-run-ovn-kubernetes\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-kubelet\") pod \"ovnkube-node-pdgzd\" (UID: 
\"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-ovnkube-script-lib\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674654 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-systemd-units\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674722 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-run-netns\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674679 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-run-netns\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-kubelet\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 
00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-run-ovn-kubernetes\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-node-log\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-node-log\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674865 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-ovnkube-config\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674901 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-slash\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-systemd\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674956 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-log-socket\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.674977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-cni-bin\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675027 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-ovn\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-var-lib-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-cni-netd\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675104 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-log-socket\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675130 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-slash\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675146 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb924494-bcba-4fb8-abdf-54974f9d8344-ovn-node-metrics-cert\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675156 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-systemd\") pod \"ovnkube-node-pdgzd\" 
(UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675254 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-ovn\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675273 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675303 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-cni-bin\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675338 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-host-cni-netd\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675346 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-etc-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc 
kubenswrapper[4805]: I1203 00:18:40.675367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-var-lib-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-env-overrides\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675397 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-run-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb924494-bcba-4fb8-abdf-54974f9d8344-etc-openvswitch\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675528 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675564 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675579 4805 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675592 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675604 4805 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675617 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6mm9\" (UniqueName: \"kubernetes.io/projected/2dbad567-2c97-49dd-ac90-41fd66a3b606-kube-api-access-l6mm9\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675629 4805 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675642 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675655 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-script-lib\") on 
node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675669 4805 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675681 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675693 4805 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675705 4805 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675717 4805 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675728 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675739 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2dbad567-2c97-49dd-ac90-41fd66a3b606-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 
00:18:40.675753 4805 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675763 4805 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-log-socket\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675774 4805 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.675788 4805 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2dbad567-2c97-49dd-ac90-41fd66a3b606-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.676033 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-env-overrides\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.676116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-ovnkube-script-lib\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.676192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb924494-bcba-4fb8-abdf-54974f9d8344-ovnkube-config\") 
pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.678537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb924494-bcba-4fb8-abdf-54974f9d8344-ovn-node-metrics-cert\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.689767 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqkc4\" (UniqueName: \"kubernetes.io/projected/cb924494-bcba-4fb8-abdf-54974f9d8344-kube-api-access-jqkc4\") pod \"ovnkube-node-pdgzd\" (UID: \"cb924494-bcba-4fb8-abdf-54974f9d8344\") " pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:40 crc kubenswrapper[4805]: I1203 00:18:40.867054 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.025927 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/2.log" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.026485 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/1.log" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.026538 4805 generic.go:334] "Generic (PLEG): container finished" podID="839326a5-41df-492f-83c4-3ee9e2964dc8" containerID="33dd044a1c452ea0329e56530bf4040c12943c54a9f1455b5eda6d0509b05c15" exitCode=2 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.026598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerDied","Data":"33dd044a1c452ea0329e56530bf4040c12943c54a9f1455b5eda6d0509b05c15"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.026636 4805 scope.go:117] "RemoveContainer" containerID="00da5927fb0861f1f7e960b661c25cc0fe480a8e82b8708f9d7df8b11ac130fe" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.027216 4805 scope.go:117] "RemoveContainer" containerID="33dd044a1c452ea0329e56530bf4040c12943c54a9f1455b5eda6d0509b05c15" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.030625 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovnkube-controller/3.log" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.035869 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovn-acl-logging/0.log" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.036462 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k6pk5_2dbad567-2c97-49dd-ac90-41fd66a3b606/ovn-controller/0.log" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.036916 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" exitCode=0 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.036944 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" exitCode=0 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.036953 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" exitCode=0 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.036959 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" exitCode=0 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.036967 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" exitCode=0 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.036976 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" exitCode=0 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037023 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" exitCode=143 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037032 4805 
generic.go:334] "Generic (PLEG): container finished" podID="2dbad567-2c97-49dd-ac90-41fd66a3b606" containerID="c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" exitCode=143 Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037074 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037108 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037131 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" 
event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037159 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037170 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037175 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037180 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037185 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037190 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037208 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037213 4805 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037218 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037224 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037233 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037242 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037250 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037257 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037263 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} Dec 03 00:18:41 crc kubenswrapper[4805]: 
I1203 00:18:41.037270 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037276 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037283 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037289 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037294 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037300 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037307 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037314 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037320 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037328 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037333 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037338 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037344 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037349 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037355 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037361 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037366 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037373 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" event={"ID":"2dbad567-2c97-49dd-ac90-41fd66a3b606","Type":"ContainerDied","Data":"3bbe2aa9c6561c04bb61c96f8415981d1b789bcd8512595e3d832e2d6f157536"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037380 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037386 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037392 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037397 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037402 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037407 4805 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037413 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037419 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037424 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037431 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.037570 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k6pk5" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.041791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"37fc477dd467863fc1109fdef35fa7347aeeed517eae46756f11016842ec9fcc"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.041852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"9bd33153f07ec363014b6af8d4944fa708c725a32666f721b299b5d94409bab2"} Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.096084 4805 scope.go:117] "RemoveContainer" containerID="396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.119963 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.132770 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6pk5"] Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.138977 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k6pk5"] Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.159596 4805 scope.go:117] "RemoveContainer" containerID="f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.177217 4805 scope.go:117] "RemoveContainer" containerID="07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.199155 4805 scope.go:117] "RemoveContainer" containerID="13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 
00:18:41.215989 4805 scope.go:117] "RemoveContainer" containerID="a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.227333 4805 scope.go:117] "RemoveContainer" containerID="5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.253030 4805 scope.go:117] "RemoveContainer" containerID="f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.279461 4805 scope.go:117] "RemoveContainer" containerID="c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.295319 4805 scope.go:117] "RemoveContainer" containerID="73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.312251 4805 scope.go:117] "RemoveContainer" containerID="396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.317371 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": container with ID starting with 396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496 not found: ID does not exist" containerID="396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.317410 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} err="failed to get container status \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": rpc error: code = NotFound desc = could not find container \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": container with ID starting with 
396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.317436 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.317780 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": container with ID starting with 9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0 not found: ID does not exist" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.317805 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} err="failed to get container status \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": rpc error: code = NotFound desc = could not find container \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": container with ID starting with 9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.317822 4805 scope.go:117] "RemoveContainer" containerID="f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.320245 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": container with ID starting with f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0 not found: ID does not exist" containerID="f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" Dec 03 00:18:41 crc 
kubenswrapper[4805]: I1203 00:18:41.320271 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} err="failed to get container status \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": rpc error: code = NotFound desc = could not find container \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": container with ID starting with f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.320286 4805 scope.go:117] "RemoveContainer" containerID="07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.320675 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": container with ID starting with 07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d not found: ID does not exist" containerID="07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.320729 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} err="failed to get container status \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": rpc error: code = NotFound desc = could not find container \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": container with ID starting with 07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.320767 4805 scope.go:117] "RemoveContainer" containerID="13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" Dec 03 
00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.321359 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": container with ID starting with 13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512 not found: ID does not exist" containerID="13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.321416 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} err="failed to get container status \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": rpc error: code = NotFound desc = could not find container \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": container with ID starting with 13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.321456 4805 scope.go:117] "RemoveContainer" containerID="a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.323094 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": container with ID starting with a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9 not found: ID does not exist" containerID="a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.323131 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} err="failed to get container status 
\"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": rpc error: code = NotFound desc = could not find container \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": container with ID starting with a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.323148 4805 scope.go:117] "RemoveContainer" containerID="5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.325155 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": container with ID starting with 5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15 not found: ID does not exist" containerID="5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.325181 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} err="failed to get container status \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": rpc error: code = NotFound desc = could not find container \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": container with ID starting with 5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.325612 4805 scope.go:117] "RemoveContainer" containerID="f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.326606 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": container with ID starting with f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d not found: ID does not exist" containerID="f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.326672 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} err="failed to get container status \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": rpc error: code = NotFound desc = could not find container \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": container with ID starting with f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.326720 4805 scope.go:117] "RemoveContainer" containerID="c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.327893 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": container with ID starting with c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307 not found: ID does not exist" containerID="c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.327940 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} err="failed to get container status \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": rpc error: code = NotFound desc = could not find container \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": container with ID 
starting with c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.327972 4805 scope.go:117] "RemoveContainer" containerID="73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f" Dec 03 00:18:41 crc kubenswrapper[4805]: E1203 00:18:41.329981 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": container with ID starting with 73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f not found: ID does not exist" containerID="73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.330028 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} err="failed to get container status \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": rpc error: code = NotFound desc = could not find container \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": container with ID starting with 73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.330058 4805 scope.go:117] "RemoveContainer" containerID="396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.331401 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} err="failed to get container status \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": rpc error: code = NotFound desc = could not find container \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": 
container with ID starting with 396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.331426 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.332841 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} err="failed to get container status \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": rpc error: code = NotFound desc = could not find container \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": container with ID starting with 9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.332889 4805 scope.go:117] "RemoveContainer" containerID="f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.333405 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} err="failed to get container status \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": rpc error: code = NotFound desc = could not find container \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": container with ID starting with f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.333444 4805 scope.go:117] "RemoveContainer" containerID="07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.333778 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} err="failed to get container status \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": rpc error: code = NotFound desc = could not find container \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": container with ID starting with 07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.333797 4805 scope.go:117] "RemoveContainer" containerID="13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.334080 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} err="failed to get container status \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": rpc error: code = NotFound desc = could not find container \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": container with ID starting with 13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.334101 4805 scope.go:117] "RemoveContainer" containerID="a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.334413 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} err="failed to get container status \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": rpc error: code = NotFound desc = could not find container \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": container with ID starting with a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9 not found: ID does not 
exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.334453 4805 scope.go:117] "RemoveContainer" containerID="5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.334829 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} err="failed to get container status \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": rpc error: code = NotFound desc = could not find container \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": container with ID starting with 5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.334863 4805 scope.go:117] "RemoveContainer" containerID="f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.335179 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} err="failed to get container status \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": rpc error: code = NotFound desc = could not find container \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": container with ID starting with f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.335237 4805 scope.go:117] "RemoveContainer" containerID="c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.335853 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} err="failed to get container status 
\"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": rpc error: code = NotFound desc = could not find container \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": container with ID starting with c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.335872 4805 scope.go:117] "RemoveContainer" containerID="73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.336138 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} err="failed to get container status \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": rpc error: code = NotFound desc = could not find container \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": container with ID starting with 73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.336158 4805 scope.go:117] "RemoveContainer" containerID="396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.336440 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} err="failed to get container status \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": rpc error: code = NotFound desc = could not find container \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": container with ID starting with 396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.336469 4805 scope.go:117] "RemoveContainer" 
containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.336755 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} err="failed to get container status \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": rpc error: code = NotFound desc = could not find container \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": container with ID starting with 9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.336776 4805 scope.go:117] "RemoveContainer" containerID="f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.337087 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} err="failed to get container status \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": rpc error: code = NotFound desc = could not find container \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": container with ID starting with f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.337123 4805 scope.go:117] "RemoveContainer" containerID="07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.337457 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} err="failed to get container status \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": rpc error: code = NotFound desc = could 
not find container \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": container with ID starting with 07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.337488 4805 scope.go:117] "RemoveContainer" containerID="13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.337748 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} err="failed to get container status \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": rpc error: code = NotFound desc = could not find container \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": container with ID starting with 13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.337773 4805 scope.go:117] "RemoveContainer" containerID="a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.338033 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} err="failed to get container status \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": rpc error: code = NotFound desc = could not find container \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": container with ID starting with a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.338065 4805 scope.go:117] "RemoveContainer" containerID="5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 
00:18:41.338357 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} err="failed to get container status \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": rpc error: code = NotFound desc = could not find container \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": container with ID starting with 5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.338383 4805 scope.go:117] "RemoveContainer" containerID="f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.338839 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} err="failed to get container status \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": rpc error: code = NotFound desc = could not find container \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": container with ID starting with f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.338868 4805 scope.go:117] "RemoveContainer" containerID="c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.341661 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} err="failed to get container status \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": rpc error: code = NotFound desc = could not find container \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": container with ID starting with 
c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.341701 4805 scope.go:117] "RemoveContainer" containerID="73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.342142 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} err="failed to get container status \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": rpc error: code = NotFound desc = could not find container \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": container with ID starting with 73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.342167 4805 scope.go:117] "RemoveContainer" containerID="396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.342690 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496"} err="failed to get container status \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": rpc error: code = NotFound desc = could not find container \"396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496\": container with ID starting with 396da02cea707d1282fd90dfa7ac18a5c77f493da041836edb367667cbda4496 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.342711 4805 scope.go:117] "RemoveContainer" containerID="9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.342979 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0"} err="failed to get container status \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": rpc error: code = NotFound desc = could not find container \"9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0\": container with ID starting with 9eb3b43ceff674315f40e5964bd0ce275583d959e62f7945899d3465d92d7fa0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.343000 4805 scope.go:117] "RemoveContainer" containerID="f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.343267 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0"} err="failed to get container status \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": rpc error: code = NotFound desc = could not find container \"f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0\": container with ID starting with f63053a5eda2f51c72f0e2773c3c00570db36ba13d5d03706cd8a7c67eeb2ce0 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.343294 4805 scope.go:117] "RemoveContainer" containerID="07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.343535 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d"} err="failed to get container status \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": rpc error: code = NotFound desc = could not find container \"07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d\": container with ID starting with 07cfe73d5d9efb50ce06942821f688b7cc3293c76da080a175ebf362660f230d not found: ID does not 
exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.343554 4805 scope.go:117] "RemoveContainer" containerID="13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.343836 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512"} err="failed to get container status \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": rpc error: code = NotFound desc = could not find container \"13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512\": container with ID starting with 13359b97349e1896b2ae9af284410a7475d98a6b1f492ab3ef14a8b432359512 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.343866 4805 scope.go:117] "RemoveContainer" containerID="a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344142 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9"} err="failed to get container status \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": rpc error: code = NotFound desc = could not find container \"a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9\": container with ID starting with a96d0376ed49334b08f3d3c5e9698f522c3571881fa4b4004adbc924c77608b9 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344164 4805 scope.go:117] "RemoveContainer" containerID="5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344428 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15"} err="failed to get container status 
\"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": rpc error: code = NotFound desc = could not find container \"5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15\": container with ID starting with 5e03c839bed9b4403c5e56e439d0fdacf44a2e00f27dcae1ae7aa82d609b8a15 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344450 4805 scope.go:117] "RemoveContainer" containerID="f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344670 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d"} err="failed to get container status \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": rpc error: code = NotFound desc = could not find container \"f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d\": container with ID starting with f00fa37aa66df6ef85678e6253af5eb990b8406cc78bbaa8030f5bc44340a09d not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344691 4805 scope.go:117] "RemoveContainer" containerID="c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344884 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307"} err="failed to get container status \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": rpc error: code = NotFound desc = could not find container \"c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307\": container with ID starting with c54d2990941d5ca23d0c70692bc4cfb748e6f54ae2a7a9ff6bb2bb25d5634307 not found: ID does not exist" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.344897 4805 scope.go:117] "RemoveContainer" 
containerID="73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f" Dec 03 00:18:41 crc kubenswrapper[4805]: I1203 00:18:41.345220 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f"} err="failed to get container status \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": rpc error: code = NotFound desc = could not find container \"73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f\": container with ID starting with 73de32a8097fe15c6498018189c924b7ebdae1a7883580da230fde1cb365336f not found: ID does not exist" Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.053810 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb924494-bcba-4fb8-abdf-54974f9d8344" containerID="37fc477dd467863fc1109fdef35fa7347aeeed517eae46756f11016842ec9fcc" exitCode=0 Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.053872 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerDied","Data":"37fc477dd467863fc1109fdef35fa7347aeeed517eae46756f11016842ec9fcc"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.053922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"0e953204788358625162302b0d30ef79c0018c92f0cc218d96e03eb7577155da"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.053943 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"0d518f31b80ff098c00d2e9a198647b5f63d3ad8deed5a1686373c8dce3d896c"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.053959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"0c8be1abcef42e9822f3e237596d9af350aed1b6efe968ba3af401103a489b66"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.053975 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"6913a9651e7194210fd1cccfb586c4042d66830fbaced05611a3ccfc0fd723a9"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.053994 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"fb4602758010eb564fa3ea955d53d7657aba31b439a92557b0725bd16da63cab"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.054011 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"a55b16d3902e0d7b6008c4f19a26cc3846d953ca7667e9ff02ef1eada0dd28a9"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.056059 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lllfh_839326a5-41df-492f-83c4-3ee9e2964dc8/kube-multus/2.log" Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.056109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lllfh" event={"ID":"839326a5-41df-492f-83c4-3ee9e2964dc8","Type":"ContainerStarted","Data":"cd4937a9a658fcd837ac24f0f167362dc31ab4043914d9df4889063512d3f945"} Dec 03 00:18:42 crc kubenswrapper[4805]: I1203 00:18:42.430698 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbad567-2c97-49dd-ac90-41fd66a3b606" path="/var/lib/kubelet/pods/2dbad567-2c97-49dd-ac90-41fd66a3b606/volumes" Dec 03 00:18:44 crc kubenswrapper[4805]: I1203 00:18:44.069178 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"e47ff1a0acb4504efa8a6e43edc71590d3977508e6fd43c793835b7604c0f496"} Dec 03 00:18:47 crc kubenswrapper[4805]: I1203 00:18:47.094888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" event={"ID":"cb924494-bcba-4fb8-abdf-54974f9d8344","Type":"ContainerStarted","Data":"9351ac5d368d72c57a9557049e4cc9c87dabf99ddcc97c314980117d6ded80df"} Dec 03 00:18:47 crc kubenswrapper[4805]: I1203 00:18:47.096292 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:47 crc kubenswrapper[4805]: I1203 00:18:47.096310 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:47 crc kubenswrapper[4805]: I1203 00:18:47.155144 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" podStartSLOduration=7.15511811 podStartE2EDuration="7.15511811s" podCreationTimestamp="2025-12-03 00:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:18:47.150919203 +0000 UTC m=+750.999881829" watchObservedRunningTime="2025-12-03 00:18:47.15511811 +0000 UTC m=+751.004080726" Dec 03 00:18:47 crc kubenswrapper[4805]: I1203 00:18:47.171437 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:48 crc kubenswrapper[4805]: I1203 00:18:48.101457 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:18:48 crc kubenswrapper[4805]: I1203 00:18:48.130235 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:19:04 crc kubenswrapper[4805]: I1203 00:19:04.208947 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 00:19:10 crc kubenswrapper[4805]: I1203 00:19:10.900615 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pdgzd" Dec 03 00:19:17 crc kubenswrapper[4805]: I1203 00:19:17.811789 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:19:17 crc kubenswrapper[4805]: I1203 00:19:17.812432 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:19:46 crc kubenswrapper[4805]: I1203 00:19:46.781876 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vns"] Dec 03 00:19:46 crc kubenswrapper[4805]: I1203 00:19:46.783241 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d4vns" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="registry-server" containerID="cri-o://c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba" gracePeriod=30 Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.119853 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.227303 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whnxg\" (UniqueName: \"kubernetes.io/projected/8f34256d-c41f-4c52-8d34-5574c5b3e862-kube-api-access-whnxg\") pod \"8f34256d-c41f-4c52-8d34-5574c5b3e862\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.227556 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-utilities\") pod \"8f34256d-c41f-4c52-8d34-5574c5b3e862\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.227587 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-catalog-content\") pod \"8f34256d-c41f-4c52-8d34-5574c5b3e862\" (UID: \"8f34256d-c41f-4c52-8d34-5574c5b3e862\") " Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.230373 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-utilities" (OuterVolumeSpecName: "utilities") pod "8f34256d-c41f-4c52-8d34-5574c5b3e862" (UID: "8f34256d-c41f-4c52-8d34-5574c5b3e862"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.236840 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f34256d-c41f-4c52-8d34-5574c5b3e862-kube-api-access-whnxg" (OuterVolumeSpecName: "kube-api-access-whnxg") pod "8f34256d-c41f-4c52-8d34-5574c5b3e862" (UID: "8f34256d-c41f-4c52-8d34-5574c5b3e862"). InnerVolumeSpecName "kube-api-access-whnxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.247927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f34256d-c41f-4c52-8d34-5574c5b3e862" (UID: "8f34256d-c41f-4c52-8d34-5574c5b3e862"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.329561 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.329608 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f34256d-c41f-4c52-8d34-5574c5b3e862-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.329624 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whnxg\" (UniqueName: \"kubernetes.io/projected/8f34256d-c41f-4c52-8d34-5574c5b3e862-kube-api-access-whnxg\") on node \"crc\" DevicePath \"\"" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.461843 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerID="c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba" exitCode=0 Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.461947 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4vns" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.461939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vns" event={"ID":"8f34256d-c41f-4c52-8d34-5574c5b3e862","Type":"ContainerDied","Data":"c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba"} Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.462011 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4vns" event={"ID":"8f34256d-c41f-4c52-8d34-5574c5b3e862","Type":"ContainerDied","Data":"c6c62f3569a05f19a35efd809fc024d3baed39c5f809028e32c14a71303a6541"} Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.462056 4805 scope.go:117] "RemoveContainer" containerID="c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.481740 4805 scope.go:117] "RemoveContainer" containerID="3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.495167 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vns"] Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.501903 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4vns"] Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.522957 4805 scope.go:117] "RemoveContainer" containerID="5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.536458 4805 scope.go:117] "RemoveContainer" containerID="c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba" Dec 03 00:19:47 crc kubenswrapper[4805]: E1203 00:19:47.537277 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba\": container with ID starting with c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba not found: ID does not exist" containerID="c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.537315 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba"} err="failed to get container status \"c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba\": rpc error: code = NotFound desc = could not find container \"c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba\": container with ID starting with c2a32f7d69bf19263bbfa7d50d03e7a139487c77036721c44069d259be8bd4ba not found: ID does not exist" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.537341 4805 scope.go:117] "RemoveContainer" containerID="3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88" Dec 03 00:19:47 crc kubenswrapper[4805]: E1203 00:19:47.537780 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88\": container with ID starting with 3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88 not found: ID does not exist" containerID="3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.537910 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88"} err="failed to get container status \"3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88\": rpc error: code = NotFound desc = could not find container \"3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88\": container with ID 
starting with 3e2cc5362bf4094905d26eb25d1cf9def9553098f9f524e38cb8093330610f88 not found: ID does not exist" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.538026 4805 scope.go:117] "RemoveContainer" containerID="5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede" Dec 03 00:19:47 crc kubenswrapper[4805]: E1203 00:19:47.538520 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede\": container with ID starting with 5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede not found: ID does not exist" containerID="5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.538548 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede"} err="failed to get container status \"5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede\": rpc error: code = NotFound desc = could not find container \"5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede\": container with ID starting with 5a37407b19705537558c6fa967e93b605930118ac265a4e0fa13cb3828430ede not found: ID does not exist" Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.810938 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:19:47 crc kubenswrapper[4805]: I1203 00:19:47.811034 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:19:48 crc kubenswrapper[4805]: I1203 00:19:48.430215 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" path="/var/lib/kubelet/pods/8f34256d-c41f-4c52-8d34-5574c5b3e862/volumes" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.583876 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk"] Dec 03 00:19:50 crc kubenswrapper[4805]: E1203 00:19:50.584471 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="extract-content" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.584490 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="extract-content" Dec 03 00:19:50 crc kubenswrapper[4805]: E1203 00:19:50.584506 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="extract-utilities" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.584515 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="extract-utilities" Dec 03 00:19:50 crc kubenswrapper[4805]: E1203 00:19:50.584539 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="registry-server" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.584545 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="registry-server" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.584678 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f34256d-c41f-4c52-8d34-5574c5b3e862" containerName="registry-server" Dec 03 00:19:50 crc kubenswrapper[4805]: 
I1203 00:19:50.585613 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.588168 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.595282 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk"] Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.674957 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.675332 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.675461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8t5f\" (UniqueName: \"kubernetes.io/projected/a6db602f-7a39-4cad-83c5-4c13ef73feb5-kube-api-access-p8t5f\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.776711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.777079 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.777218 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8t5f\" (UniqueName: \"kubernetes.io/projected/a6db602f-7a39-4cad-83c5-4c13ef73feb5-kube-api-access-p8t5f\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.777492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.777641 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.798905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8t5f\" (UniqueName: \"kubernetes.io/projected/a6db602f-7a39-4cad-83c5-4c13ef73feb5-kube-api-access-p8t5f\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:50 crc kubenswrapper[4805]: I1203 00:19:50.960407 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:51 crc kubenswrapper[4805]: I1203 00:19:51.194655 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk"] Dec 03 00:19:51 crc kubenswrapper[4805]: I1203 00:19:51.484493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" event={"ID":"a6db602f-7a39-4cad-83c5-4c13ef73feb5","Type":"ContainerStarted","Data":"14b8b71b43f332dd168b3d749de5336de09e3856bbc2f004e7cb47f54ebcfcae"} Dec 03 00:19:51 crc kubenswrapper[4805]: I1203 00:19:51.485383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" 
event={"ID":"a6db602f-7a39-4cad-83c5-4c13ef73feb5","Type":"ContainerStarted","Data":"65da011ec90738ba9ee01928f9adb2f6b1de317da8aebafb5dd1b01abf7a00ba"} Dec 03 00:19:52 crc kubenswrapper[4805]: I1203 00:19:52.492181 4805 generic.go:334] "Generic (PLEG): container finished" podID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerID="14b8b71b43f332dd168b3d749de5336de09e3856bbc2f004e7cb47f54ebcfcae" exitCode=0 Dec 03 00:19:52 crc kubenswrapper[4805]: I1203 00:19:52.492440 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" event={"ID":"a6db602f-7a39-4cad-83c5-4c13ef73feb5","Type":"ContainerDied","Data":"14b8b71b43f332dd168b3d749de5336de09e3856bbc2f004e7cb47f54ebcfcae"} Dec 03 00:19:52 crc kubenswrapper[4805]: I1203 00:19:52.495745 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.339963 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-df9vv"] Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.341575 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.353189 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-df9vv"] Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.421449 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-utilities\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.421527 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-catalog-content\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.421606 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf68s\" (UniqueName: \"kubernetes.io/projected/e8904235-7b22-4c3a-a1c6-e107f991b631-kube-api-access-vf68s\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.499394 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" event={"ID":"a6db602f-7a39-4cad-83c5-4c13ef73feb5","Type":"ContainerStarted","Data":"16be55bc1f83140a06472fc4d86353d71d8f3a2252fc7e17583511178c5e688d"} Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.524268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vf68s\" (UniqueName: \"kubernetes.io/projected/e8904235-7b22-4c3a-a1c6-e107f991b631-kube-api-access-vf68s\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.525068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-utilities\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.525091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-catalog-content\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.526385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-catalog-content\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.526658 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-utilities\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.549165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf68s\" (UniqueName: 
\"kubernetes.io/projected/e8904235-7b22-4c3a-a1c6-e107f991b631-kube-api-access-vf68s\") pod \"redhat-operators-df9vv\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.701319 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:19:53 crc kubenswrapper[4805]: I1203 00:19:53.943368 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-df9vv"] Dec 03 00:19:54 crc kubenswrapper[4805]: I1203 00:19:54.507731 4805 generic.go:334] "Generic (PLEG): container finished" podID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerID="16be55bc1f83140a06472fc4d86353d71d8f3a2252fc7e17583511178c5e688d" exitCode=0 Dec 03 00:19:54 crc kubenswrapper[4805]: I1203 00:19:54.507789 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" event={"ID":"a6db602f-7a39-4cad-83c5-4c13ef73feb5","Type":"ContainerDied","Data":"16be55bc1f83140a06472fc4d86353d71d8f3a2252fc7e17583511178c5e688d"} Dec 03 00:19:54 crc kubenswrapper[4805]: I1203 00:19:54.509766 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerID="6a6baa9ccccd49116233d267fc5f2c4ba823a5531d3b6fb130554be79bcdd907" exitCode=0 Dec 03 00:19:54 crc kubenswrapper[4805]: I1203 00:19:54.509827 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df9vv" event={"ID":"e8904235-7b22-4c3a-a1c6-e107f991b631","Type":"ContainerDied","Data":"6a6baa9ccccd49116233d267fc5f2c4ba823a5531d3b6fb130554be79bcdd907"} Dec 03 00:19:54 crc kubenswrapper[4805]: I1203 00:19:54.509862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df9vv" 
event={"ID":"e8904235-7b22-4c3a-a1c6-e107f991b631","Type":"ContainerStarted","Data":"7414efd8f0089f2d553eadddbfbdab8731bd6ab6fd6beb29a39217653dd9d85e"} Dec 03 00:19:55 crc kubenswrapper[4805]: I1203 00:19:55.517863 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df9vv" event={"ID":"e8904235-7b22-4c3a-a1c6-e107f991b631","Type":"ContainerStarted","Data":"f72ecbe2a840a680c3013c430507e7180046ac699d51270cff92085f288c7646"} Dec 03 00:19:55 crc kubenswrapper[4805]: I1203 00:19:55.521006 4805 generic.go:334] "Generic (PLEG): container finished" podID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerID="847815298a60bb3d220ce95967aa2ee5d2912222e4d3876f01c965a639d325a9" exitCode=0 Dec 03 00:19:55 crc kubenswrapper[4805]: I1203 00:19:55.521067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" event={"ID":"a6db602f-7a39-4cad-83c5-4c13ef73feb5","Type":"ContainerDied","Data":"847815298a60bb3d220ce95967aa2ee5d2912222e4d3876f01c965a639d325a9"} Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.531554 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerID="f72ecbe2a840a680c3013c430507e7180046ac699d51270cff92085f288c7646" exitCode=0 Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.531726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df9vv" event={"ID":"e8904235-7b22-4c3a-a1c6-e107f991b631","Type":"ContainerDied","Data":"f72ecbe2a840a680c3013c430507e7180046ac699d51270cff92085f288c7646"} Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.857007 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.893103 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-util\") pod \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.893185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8t5f\" (UniqueName: \"kubernetes.io/projected/a6db602f-7a39-4cad-83c5-4c13ef73feb5-kube-api-access-p8t5f\") pod \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.893241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-bundle\") pod \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\" (UID: \"a6db602f-7a39-4cad-83c5-4c13ef73feb5\") " Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.895410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-bundle" (OuterVolumeSpecName: "bundle") pod "a6db602f-7a39-4cad-83c5-4c13ef73feb5" (UID: "a6db602f-7a39-4cad-83c5-4c13ef73feb5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.901660 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6db602f-7a39-4cad-83c5-4c13ef73feb5-kube-api-access-p8t5f" (OuterVolumeSpecName: "kube-api-access-p8t5f") pod "a6db602f-7a39-4cad-83c5-4c13ef73feb5" (UID: "a6db602f-7a39-4cad-83c5-4c13ef73feb5"). InnerVolumeSpecName "kube-api-access-p8t5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.906837 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-util" (OuterVolumeSpecName: "util") pod "a6db602f-7a39-4cad-83c5-4c13ef73feb5" (UID: "a6db602f-7a39-4cad-83c5-4c13ef73feb5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.995433 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.995506 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8t5f\" (UniqueName: \"kubernetes.io/projected/a6db602f-7a39-4cad-83c5-4c13ef73feb5-kube-api-access-p8t5f\") on node \"crc\" DevicePath \"\"" Dec 03 00:19:56 crc kubenswrapper[4805]: I1203 00:19:56.995520 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6db602f-7a39-4cad-83c5-4c13ef73feb5-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.541762 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" event={"ID":"a6db602f-7a39-4cad-83c5-4c13ef73feb5","Type":"ContainerDied","Data":"65da011ec90738ba9ee01928f9adb2f6b1de317da8aebafb5dd1b01abf7a00ba"} Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.542378 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65da011ec90738ba9ee01928f9adb2f6b1de317da8aebafb5dd1b01abf7a00ba" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.541812 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.546264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df9vv" event={"ID":"e8904235-7b22-4c3a-a1c6-e107f991b631","Type":"ContainerStarted","Data":"5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918"} Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.571608 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-df9vv" podStartSLOduration=1.9827235600000002 podStartE2EDuration="4.571589803s" podCreationTimestamp="2025-12-03 00:19:53 +0000 UTC" firstStartedPulling="2025-12-03 00:19:54.511564889 +0000 UTC m=+818.360527515" lastFinishedPulling="2025-12-03 00:19:57.100431152 +0000 UTC m=+820.949393758" observedRunningTime="2025-12-03 00:19:57.570570818 +0000 UTC m=+821.419533424" watchObservedRunningTime="2025-12-03 00:19:57.571589803 +0000 UTC m=+821.420552409" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.602029 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr"] Dec 03 00:19:57 crc kubenswrapper[4805]: E1203 00:19:57.602472 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerName="extract" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.602497 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerName="extract" Dec 03 00:19:57 crc kubenswrapper[4805]: E1203 00:19:57.602523 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerName="pull" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.602534 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerName="pull" Dec 03 00:19:57 crc kubenswrapper[4805]: E1203 00:19:57.602558 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerName="util" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.602567 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerName="util" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.602698 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6db602f-7a39-4cad-83c5-4c13ef73feb5" containerName="extract" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.604179 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.607839 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.612007 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr"] Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.704612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th2dk\" (UniqueName: \"kubernetes.io/projected/11b61252-9fc1-4387-a598-411f2b3c2833-kube-api-access-th2dk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.704751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.704805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.806360 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th2dk\" (UniqueName: \"kubernetes.io/projected/11b61252-9fc1-4387-a598-411f2b3c2833-kube-api-access-th2dk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.806421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.806453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-util\") pod 
\"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.806953 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.807039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.827105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th2dk\" (UniqueName: \"kubernetes.io/projected/11b61252-9fc1-4387-a598-411f2b3c2833-kube-api-access-th2dk\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:57 crc kubenswrapper[4805]: I1203 00:19:57.920058 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.139642 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr"] Dec 03 00:19:58 crc kubenswrapper[4805]: W1203 00:19:58.150407 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11b61252_9fc1_4387_a598_411f2b3c2833.slice/crio-09a9ebf6ef62099080a29f2c84d79971dadb454e38b53785b5ded67351af44a6 WatchSource:0}: Error finding container 09a9ebf6ef62099080a29f2c84d79971dadb454e38b53785b5ded67351af44a6: Status 404 returned error can't find the container with id 09a9ebf6ef62099080a29f2c84d79971dadb454e38b53785b5ded67351af44a6 Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.390332 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td"] Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.392437 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.401261 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td"] Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.517255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.517990 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmtf\" (UniqueName: \"kubernetes.io/projected/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-kube-api-access-bxmtf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.518334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.554492 4805 generic.go:334] "Generic (PLEG): container finished" podID="11b61252-9fc1-4387-a598-411f2b3c2833" 
containerID="396d82c158c3cf175c0f7e5e85368007d0d5c9fde3dbf688849ba96a9c43f773" exitCode=0 Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.554623 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" event={"ID":"11b61252-9fc1-4387-a598-411f2b3c2833","Type":"ContainerDied","Data":"396d82c158c3cf175c0f7e5e85368007d0d5c9fde3dbf688849ba96a9c43f773"} Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.554691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" event={"ID":"11b61252-9fc1-4387-a598-411f2b3c2833","Type":"ContainerStarted","Data":"09a9ebf6ef62099080a29f2c84d79971dadb454e38b53785b5ded67351af44a6"} Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.619537 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxmtf\" (UniqueName: \"kubernetes.io/projected/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-kube-api-access-bxmtf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.619664 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.619802 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-util\") pod 
\"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.620740 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.620758 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.640575 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxmtf\" (UniqueName: \"kubernetes.io/projected/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-kube-api-access-bxmtf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.748072 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:19:58 crc kubenswrapper[4805]: I1203 00:19:58.983849 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td"] Dec 03 00:19:58 crc kubenswrapper[4805]: W1203 00:19:58.993638 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd3972f3_a16b_4fec_8bfd_a0bebf5c65c4.slice/crio-7b5e4c8185a3b296da427b6a66123505eb238e3ccdd14ef4fb5e58ffad61e2c2 WatchSource:0}: Error finding container 7b5e4c8185a3b296da427b6a66123505eb238e3ccdd14ef4fb5e58ffad61e2c2: Status 404 returned error can't find the container with id 7b5e4c8185a3b296da427b6a66123505eb238e3ccdd14ef4fb5e58ffad61e2c2 Dec 03 00:19:59 crc kubenswrapper[4805]: I1203 00:19:59.567024 4805 generic.go:334] "Generic (PLEG): container finished" podID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerID="f5b8157bb487f61e35086cce3b2a5f35b0f6fc79f40b64dd756f5480100f297b" exitCode=0 Dec 03 00:19:59 crc kubenswrapper[4805]: I1203 00:19:59.567099 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" event={"ID":"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4","Type":"ContainerDied","Data":"f5b8157bb487f61e35086cce3b2a5f35b0f6fc79f40b64dd756f5480100f297b"} Dec 03 00:19:59 crc kubenswrapper[4805]: I1203 00:19:59.567140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" event={"ID":"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4","Type":"ContainerStarted","Data":"7b5e4c8185a3b296da427b6a66123505eb238e3ccdd14ef4fb5e58ffad61e2c2"} Dec 03 00:20:00 crc kubenswrapper[4805]: I1203 00:20:00.574562 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="11b61252-9fc1-4387-a598-411f2b3c2833" containerID="b3650767fcb39da28823ddb399accab728dffee538a5797081e89693dd19f282" exitCode=0 Dec 03 00:20:00 crc kubenswrapper[4805]: I1203 00:20:00.574669 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" event={"ID":"11b61252-9fc1-4387-a598-411f2b3c2833","Type":"ContainerDied","Data":"b3650767fcb39da28823ddb399accab728dffee538a5797081e89693dd19f282"} Dec 03 00:20:01 crc kubenswrapper[4805]: I1203 00:20:01.583671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" event={"ID":"11b61252-9fc1-4387-a598-411f2b3c2833","Type":"ContainerStarted","Data":"44ff6ef61c568201679e4cc865acbfad53a966113a2364e77cf58c08768eb505"} Dec 03 00:20:01 crc kubenswrapper[4805]: I1203 00:20:01.585624 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" event={"ID":"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4","Type":"ContainerStarted","Data":"165d0a16326f0af70a7213df03e4632680f7fa1dfeeb3620f0b93975fba5928e"} Dec 03 00:20:01 crc kubenswrapper[4805]: I1203 00:20:01.659728 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" podStartSLOduration=3.702840597 podStartE2EDuration="4.659699315s" podCreationTimestamp="2025-12-03 00:19:57 +0000 UTC" firstStartedPulling="2025-12-03 00:19:58.55706614 +0000 UTC m=+822.406028746" lastFinishedPulling="2025-12-03 00:19:59.513924858 +0000 UTC m=+823.362887464" observedRunningTime="2025-12-03 00:20:01.655007367 +0000 UTC m=+825.503969983" watchObservedRunningTime="2025-12-03 00:20:01.659699315 +0000 UTC m=+825.508661921" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.140615 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-pr7vv"] Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.142050 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.168471 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pr7vv"] Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.189093 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgw4k\" (UniqueName: \"kubernetes.io/projected/f6ad35fc-a27e-498d-b30e-442d3633116b-kube-api-access-qgw4k\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.189211 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-utilities\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.189234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-catalog-content\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.290062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-utilities\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") 
" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.290119 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-catalog-content\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.290181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgw4k\" (UniqueName: \"kubernetes.io/projected/f6ad35fc-a27e-498d-b30e-442d3633116b-kube-api-access-qgw4k\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.290757 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-utilities\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.290944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-catalog-content\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.318319 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgw4k\" (UniqueName: \"kubernetes.io/projected/f6ad35fc-a27e-498d-b30e-442d3633116b-kube-api-access-qgw4k\") pod \"certified-operators-pr7vv\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " 
pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.461108 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.608602 4805 generic.go:334] "Generic (PLEG): container finished" podID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerID="165d0a16326f0af70a7213df03e4632680f7fa1dfeeb3620f0b93975fba5928e" exitCode=0 Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.609160 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" event={"ID":"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4","Type":"ContainerDied","Data":"165d0a16326f0af70a7213df03e4632680f7fa1dfeeb3620f0b93975fba5928e"} Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.706334 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.707148 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:20:03 crc kubenswrapper[4805]: I1203 00:20:03.915837 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pr7vv"] Dec 03 00:20:04 crc kubenswrapper[4805]: I1203 00:20:04.615792 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr7vv" event={"ID":"f6ad35fc-a27e-498d-b30e-442d3633116b","Type":"ContainerStarted","Data":"eeb193cfacb40034f196e31fe37aad40914e8ab2955966c6ad2b3612d9e9156e"} Dec 03 00:20:04 crc kubenswrapper[4805]: I1203 00:20:04.616158 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr7vv" 
event={"ID":"f6ad35fc-a27e-498d-b30e-442d3633116b","Type":"ContainerStarted","Data":"f66ec7775c7b85bd17589d2d46ee129dcf28f8d5512dbe0c0d470e6c2ac3c9f8"} Dec 03 00:20:04 crc kubenswrapper[4805]: I1203 00:20:04.618668 4805 generic.go:334] "Generic (PLEG): container finished" podID="11b61252-9fc1-4387-a598-411f2b3c2833" containerID="44ff6ef61c568201679e4cc865acbfad53a966113a2364e77cf58c08768eb505" exitCode=0 Dec 03 00:20:04 crc kubenswrapper[4805]: I1203 00:20:04.618719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" event={"ID":"11b61252-9fc1-4387-a598-411f2b3c2833","Type":"ContainerDied","Data":"44ff6ef61c568201679e4cc865acbfad53a966113a2364e77cf58c08768eb505"} Dec 03 00:20:04 crc kubenswrapper[4805]: I1203 00:20:04.621491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" event={"ID":"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4","Type":"ContainerStarted","Data":"c89c6dab916bf34c66c7114dd31eab649041eff1703792ce996d33c8a099c902"} Dec 03 00:20:04 crc kubenswrapper[4805]: I1203 00:20:04.720615 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" podStartSLOduration=5.881822301 podStartE2EDuration="6.720583461s" podCreationTimestamp="2025-12-03 00:19:58 +0000 UTC" firstStartedPulling="2025-12-03 00:19:59.570496949 +0000 UTC m=+823.419459555" lastFinishedPulling="2025-12-03 00:20:00.409258109 +0000 UTC m=+824.258220715" observedRunningTime="2025-12-03 00:20:04.716699714 +0000 UTC m=+828.565662310" watchObservedRunningTime="2025-12-03 00:20:04.720583461 +0000 UTC m=+828.569546087" Dec 03 00:20:04 crc kubenswrapper[4805]: I1203 00:20:04.774287 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-df9vv" 
podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="registry-server" probeResult="failure" output=< Dec 03 00:20:04 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 03 00:20:04 crc kubenswrapper[4805]: > Dec 03 00:20:05 crc kubenswrapper[4805]: I1203 00:20:05.629739 4805 generic.go:334] "Generic (PLEG): container finished" podID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerID="c89c6dab916bf34c66c7114dd31eab649041eff1703792ce996d33c8a099c902" exitCode=0 Dec 03 00:20:05 crc kubenswrapper[4805]: I1203 00:20:05.629824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" event={"ID":"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4","Type":"ContainerDied","Data":"c89c6dab916bf34c66c7114dd31eab649041eff1703792ce996d33c8a099c902"} Dec 03 00:20:05 crc kubenswrapper[4805]: I1203 00:20:05.633027 4805 generic.go:334] "Generic (PLEG): container finished" podID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerID="eeb193cfacb40034f196e31fe37aad40914e8ab2955966c6ad2b3612d9e9156e" exitCode=0 Dec 03 00:20:05 crc kubenswrapper[4805]: I1203 00:20:05.633186 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr7vv" event={"ID":"f6ad35fc-a27e-498d-b30e-442d3633116b","Type":"ContainerDied","Data":"eeb193cfacb40034f196e31fe37aad40914e8ab2955966c6ad2b3612d9e9156e"} Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.023819 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.138692 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th2dk\" (UniqueName: \"kubernetes.io/projected/11b61252-9fc1-4387-a598-411f2b3c2833-kube-api-access-th2dk\") pod \"11b61252-9fc1-4387-a598-411f2b3c2833\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.139109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-bundle\") pod \"11b61252-9fc1-4387-a598-411f2b3c2833\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.139191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-util\") pod \"11b61252-9fc1-4387-a598-411f2b3c2833\" (UID: \"11b61252-9fc1-4387-a598-411f2b3c2833\") " Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.139805 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-bundle" (OuterVolumeSpecName: "bundle") pod "11b61252-9fc1-4387-a598-411f2b3c2833" (UID: "11b61252-9fc1-4387-a598-411f2b3c2833"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.148556 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b61252-9fc1-4387-a598-411f2b3c2833-kube-api-access-th2dk" (OuterVolumeSpecName: "kube-api-access-th2dk") pod "11b61252-9fc1-4387-a598-411f2b3c2833" (UID: "11b61252-9fc1-4387-a598-411f2b3c2833"). InnerVolumeSpecName "kube-api-access-th2dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.206316 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-util" (OuterVolumeSpecName: "util") pod "11b61252-9fc1-4387-a598-411f2b3c2833" (UID: "11b61252-9fc1-4387-a598-411f2b3c2833"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.241153 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.241188 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11b61252-9fc1-4387-a598-411f2b3c2833-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.241211 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th2dk\" (UniqueName: \"kubernetes.io/projected/11b61252-9fc1-4387-a598-411f2b3c2833-kube-api-access-th2dk\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.290999 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz"] Dec 03 00:20:06 crc kubenswrapper[4805]: E1203 00:20:06.291314 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b61252-9fc1-4387-a598-411f2b3c2833" containerName="extract" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.291334 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b61252-9fc1-4387-a598-411f2b3c2833" containerName="extract" Dec 03 00:20:06 crc kubenswrapper[4805]: E1203 00:20:06.291348 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11b61252-9fc1-4387-a598-411f2b3c2833" containerName="util" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.291357 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b61252-9fc1-4387-a598-411f2b3c2833" containerName="util" Dec 03 00:20:06 crc kubenswrapper[4805]: E1203 00:20:06.291379 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b61252-9fc1-4387-a598-411f2b3c2833" containerName="pull" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.291389 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b61252-9fc1-4387-a598-411f2b3c2833" containerName="pull" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.291505 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b61252-9fc1-4387-a598-411f2b3c2833" containerName="extract" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.292498 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.319303 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz"] Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.342413 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.342494 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgqpr\" (UniqueName: 
\"kubernetes.io/projected/fc243a86-5fa7-4260-8c90-92eaaac927fe-kube-api-access-jgqpr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.342578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.443820 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.443902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgqpr\" (UniqueName: \"kubernetes.io/projected/fc243a86-5fa7-4260-8c90-92eaaac927fe-kube-api-access-jgqpr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.443995 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-util\") pod 
\"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.444439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.444533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.507411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgqpr\" (UniqueName: \"kubernetes.io/projected/fc243a86-5fa7-4260-8c90-92eaaac927fe-kube-api-access-jgqpr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.610026 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.646099 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" event={"ID":"11b61252-9fc1-4387-a598-411f2b3c2833","Type":"ContainerDied","Data":"09a9ebf6ef62099080a29f2c84d79971dadb454e38b53785b5ded67351af44a6"} Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.646583 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09a9ebf6ef62099080a29f2c84d79971dadb454e38b53785b5ded67351af44a6" Dec 03 00:20:06 crc kubenswrapper[4805]: I1203 00:20:06.646177 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.147113 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.231115 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz"] Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.254517 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-util\") pod \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.254634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxmtf\" (UniqueName: \"kubernetes.io/projected/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-kube-api-access-bxmtf\") pod \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.254766 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-bundle\") pod \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\" (UID: \"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4\") " Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.256109 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-bundle" (OuterVolumeSpecName: "bundle") pod "dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" (UID: "dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.272930 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-kube-api-access-bxmtf" (OuterVolumeSpecName: "kube-api-access-bxmtf") pod "dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" (UID: "dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4"). InnerVolumeSpecName "kube-api-access-bxmtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.274337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-util" (OuterVolumeSpecName: "util") pod "dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" (UID: "dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.356213 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.356258 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.356270 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxmtf\" (UniqueName: \"kubernetes.io/projected/dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4-kube-api-access-bxmtf\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.653023 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" 
event={"ID":"fc243a86-5fa7-4260-8c90-92eaaac927fe","Type":"ContainerStarted","Data":"a529d1689ed109c190e5ddba2661eb2cbe63599ec81f0e82f79650bd80b6c7fe"} Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.653460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" event={"ID":"fc243a86-5fa7-4260-8c90-92eaaac927fe","Type":"ContainerStarted","Data":"304aa83bb9541634187897a9f7cc2b88ef4f4224083dc141e00445ddb8809d7a"} Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.654997 4805 generic.go:334] "Generic (PLEG): container finished" podID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerID="c8aaca849655d398d5f448264e72f9dd490d402cb92ee664f2721dd81799745c" exitCode=0 Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.655109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr7vv" event={"ID":"f6ad35fc-a27e-498d-b30e-442d3633116b","Type":"ContainerDied","Data":"c8aaca849655d398d5f448264e72f9dd490d402cb92ee664f2721dd81799745c"} Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.662830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" event={"ID":"dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4","Type":"ContainerDied","Data":"7b5e4c8185a3b296da427b6a66123505eb238e3ccdd14ef4fb5e58ffad61e2c2"} Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.662873 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b5e4c8185a3b296da427b6a66123505eb238e3ccdd14ef4fb5e58ffad61e2c2" Dec 03 00:20:07 crc kubenswrapper[4805]: I1203 00:20:07.662949 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.007356 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8"] Dec 03 00:20:08 crc kubenswrapper[4805]: E1203 00:20:08.007898 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerName="pull" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.007912 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerName="pull" Dec 03 00:20:08 crc kubenswrapper[4805]: E1203 00:20:08.007921 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerName="extract" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.007927 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerName="extract" Dec 03 00:20:08 crc kubenswrapper[4805]: E1203 00:20:08.007939 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerName="util" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.007946 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerName="util" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.008093 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4" containerName="extract" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.008570 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.012081 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4274s" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.017324 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.020263 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.033474 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.066893 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcr7b\" (UniqueName: \"kubernetes.io/projected/164269e7-9f89-47f8-b363-bcb620782a98-kube-api-access-qcr7b\") pod \"obo-prometheus-operator-668cf9dfbb-n4dq8\" (UID: \"164269e7-9f89-47f8-b363-bcb620782a98\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.126633 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.127604 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.132959 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.133128 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-sxtld" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.144899 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.148427 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.149275 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.168225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd60859c-d506-402e-90b3-22e44a9cde9a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq\" (UID: \"dd60859c-d506-402e-90b3-22e44a9cde9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.168298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd60859c-d506-402e-90b3-22e44a9cde9a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq\" (UID: \"dd60859c-d506-402e-90b3-22e44a9cde9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.168343 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcr7b\" (UniqueName: \"kubernetes.io/projected/164269e7-9f89-47f8-b363-bcb620782a98-kube-api-access-qcr7b\") pod \"obo-prometheus-operator-668cf9dfbb-n4dq8\" (UID: \"164269e7-9f89-47f8-b363-bcb620782a98\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.178527 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.191261 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcr7b\" (UniqueName: \"kubernetes.io/projected/164269e7-9f89-47f8-b363-bcb620782a98-kube-api-access-qcr7b\") pod 
\"obo-prometheus-operator-668cf9dfbb-n4dq8\" (UID: \"164269e7-9f89-47f8-b363-bcb620782a98\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.269364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8d0d1b6-817f-4d74-8373-1e186de34888-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl\" (UID: \"c8d0d1b6-817f-4d74-8373-1e186de34888\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.269507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8d0d1b6-817f-4d74-8373-1e186de34888-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl\" (UID: \"c8d0d1b6-817f-4d74-8373-1e186de34888\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.269591 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd60859c-d506-402e-90b3-22e44a9cde9a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq\" (UID: \"dd60859c-d506-402e-90b3-22e44a9cde9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.269688 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd60859c-d506-402e-90b3-22e44a9cde9a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq\" (UID: \"dd60859c-d506-402e-90b3-22e44a9cde9a\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.272646 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd60859c-d506-402e-90b3-22e44a9cde9a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq\" (UID: \"dd60859c-d506-402e-90b3-22e44a9cde9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.273977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd60859c-d506-402e-90b3-22e44a9cde9a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq\" (UID: \"dd60859c-d506-402e-90b3-22e44a9cde9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.323900 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.332647 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sg4sp"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.333626 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.335816 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-h8xjh" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.336408 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.352413 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sg4sp"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.397300 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbkxn\" (UniqueName: \"kubernetes.io/projected/283d9c09-ce9f-43b1-9849-49926b74fdb2-kube-api-access-hbkxn\") pod \"observability-operator-d8bb48f5d-sg4sp\" (UID: \"283d9c09-ce9f-43b1-9849-49926b74fdb2\") " pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.397383 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8d0d1b6-817f-4d74-8373-1e186de34888-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl\" (UID: \"c8d0d1b6-817f-4d74-8373-1e186de34888\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.397427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8d0d1b6-817f-4d74-8373-1e186de34888-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl\" (UID: \"c8d0d1b6-817f-4d74-8373-1e186de34888\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.397458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/283d9c09-ce9f-43b1-9849-49926b74fdb2-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sg4sp\" (UID: \"283d9c09-ce9f-43b1-9849-49926b74fdb2\") " pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.403014 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8d0d1b6-817f-4d74-8373-1e186de34888-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl\" (UID: \"c8d0d1b6-817f-4d74-8373-1e186de34888\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.403446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8d0d1b6-817f-4d74-8373-1e186de34888-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl\" (UID: \"c8d0d1b6-817f-4d74-8373-1e186de34888\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.444522 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.464355 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.483434 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-zb45p"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.484315 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.488371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-84qdz" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.498753 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/283d9c09-ce9f-43b1-9849-49926b74fdb2-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sg4sp\" (UID: \"283d9c09-ce9f-43b1-9849-49926b74fdb2\") " pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.498857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbkxn\" (UniqueName: \"kubernetes.io/projected/283d9c09-ce9f-43b1-9849-49926b74fdb2-kube-api-access-hbkxn\") pod \"observability-operator-d8bb48f5d-sg4sp\" (UID: \"283d9c09-ce9f-43b1-9849-49926b74fdb2\") " pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.499595 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-zb45p"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.507969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/283d9c09-ce9f-43b1-9849-49926b74fdb2-observability-operator-tls\") pod 
\"observability-operator-d8bb48f5d-sg4sp\" (UID: \"283d9c09-ce9f-43b1-9849-49926b74fdb2\") " pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.525832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbkxn\" (UniqueName: \"kubernetes.io/projected/283d9c09-ce9f-43b1-9849-49926b74fdb2-kube-api-access-hbkxn\") pod \"observability-operator-d8bb48f5d-sg4sp\" (UID: \"283d9c09-ce9f-43b1-9849-49926b74fdb2\") " pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.604775 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tvf\" (UniqueName: \"kubernetes.io/projected/c1a67691-4899-4efb-92fd-8e374caac92f-kube-api-access-74tvf\") pod \"perses-operator-5446b9c989-zb45p\" (UID: \"c1a67691-4899-4efb-92fd-8e374caac92f\") " pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.604834 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1a67691-4899-4efb-92fd-8e374caac92f-openshift-service-ca\") pod \"perses-operator-5446b9c989-zb45p\" (UID: \"c1a67691-4899-4efb-92fd-8e374caac92f\") " pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.674623 4805 generic.go:334] "Generic (PLEG): container finished" podID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerID="a529d1689ed109c190e5ddba2661eb2cbe63599ec81f0e82f79650bd80b6c7fe" exitCode=0 Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.674693 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" 
event={"ID":"fc243a86-5fa7-4260-8c90-92eaaac927fe","Type":"ContainerDied","Data":"a529d1689ed109c190e5ddba2661eb2cbe63599ec81f0e82f79650bd80b6c7fe"} Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.685664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr7vv" event={"ID":"f6ad35fc-a27e-498d-b30e-442d3633116b","Type":"ContainerStarted","Data":"173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636"} Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.705781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tvf\" (UniqueName: \"kubernetes.io/projected/c1a67691-4899-4efb-92fd-8e374caac92f-kube-api-access-74tvf\") pod \"perses-operator-5446b9c989-zb45p\" (UID: \"c1a67691-4899-4efb-92fd-8e374caac92f\") " pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.706138 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1a67691-4899-4efb-92fd-8e374caac92f-openshift-service-ca\") pod \"perses-operator-5446b9c989-zb45p\" (UID: \"c1a67691-4899-4efb-92fd-8e374caac92f\") " pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.707082 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1a67691-4899-4efb-92fd-8e374caac92f-openshift-service-ca\") pod \"perses-operator-5446b9c989-zb45p\" (UID: \"c1a67691-4899-4efb-92fd-8e374caac92f\") " pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.726299 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tvf\" (UniqueName: \"kubernetes.io/projected/c1a67691-4899-4efb-92fd-8e374caac92f-kube-api-access-74tvf\") pod 
\"perses-operator-5446b9c989-zb45p\" (UID: \"c1a67691-4899-4efb-92fd-8e374caac92f\") " pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.730024 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.815361 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.857983 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pr7vv" podStartSLOduration=3.282053964 podStartE2EDuration="5.857961001s" podCreationTimestamp="2025-12-03 00:20:03 +0000 UTC" firstStartedPulling="2025-12-03 00:20:05.638066739 +0000 UTC m=+829.487029345" lastFinishedPulling="2025-12-03 00:20:08.213973776 +0000 UTC m=+832.062936382" observedRunningTime="2025-12-03 00:20:08.730161549 +0000 UTC m=+832.579124155" watchObservedRunningTime="2025-12-03 00:20:08.857961001 +0000 UTC m=+832.706923607" Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.867211 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.956654 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl"] Dec 03 00:20:08 crc kubenswrapper[4805]: I1203 00:20:08.982643 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8"] Dec 03 00:20:09 crc kubenswrapper[4805]: I1203 00:20:09.147096 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sg4sp"] Dec 03 00:20:09 crc kubenswrapper[4805]: 
W1203 00:20:09.169474 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod283d9c09_ce9f_43b1_9849_49926b74fdb2.slice/crio-3e89b1ace1396a006f8fc87dbfe8a7dc72cf3c9526aefc940b455307a3976be1 WatchSource:0}: Error finding container 3e89b1ace1396a006f8fc87dbfe8a7dc72cf3c9526aefc940b455307a3976be1: Status 404 returned error can't find the container with id 3e89b1ace1396a006f8fc87dbfe8a7dc72cf3c9526aefc940b455307a3976be1 Dec 03 00:20:09 crc kubenswrapper[4805]: I1203 00:20:09.283725 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-zb45p"] Dec 03 00:20:09 crc kubenswrapper[4805]: I1203 00:20:09.693560 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" event={"ID":"dd60859c-d506-402e-90b3-22e44a9cde9a","Type":"ContainerStarted","Data":"5e300084dca716dbb561bbbde04c12f2b4db6936ff94ccbf86e389fb875fc990"} Dec 03 00:20:09 crc kubenswrapper[4805]: I1203 00:20:09.695720 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-zb45p" event={"ID":"c1a67691-4899-4efb-92fd-8e374caac92f","Type":"ContainerStarted","Data":"9984865a32544a63ab95fde6f91cb553c484fe1d51ae23f99f2b62663d876071"} Dec 03 00:20:09 crc kubenswrapper[4805]: I1203 00:20:09.697543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" event={"ID":"283d9c09-ce9f-43b1-9849-49926b74fdb2","Type":"ContainerStarted","Data":"3e89b1ace1396a006f8fc87dbfe8a7dc72cf3c9526aefc940b455307a3976be1"} Dec 03 00:20:09 crc kubenswrapper[4805]: I1203 00:20:09.699090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" 
event={"ID":"164269e7-9f89-47f8-b363-bcb620782a98","Type":"ContainerStarted","Data":"4b157112a567c9646abd33a6cfe4d14cc9395a568288117560ba5b9d797049f0"} Dec 03 00:20:09 crc kubenswrapper[4805]: I1203 00:20:09.702511 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" event={"ID":"c8d0d1b6-817f-4d74-8373-1e186de34888","Type":"ContainerStarted","Data":"4cb8b9be37ed26354d118f60848a3d4c0aa24e6cc36a7d48b5b4df78047206c3"} Dec 03 00:20:13 crc kubenswrapper[4805]: I1203 00:20:13.463880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:13 crc kubenswrapper[4805]: I1203 00:20:13.464349 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:13 crc kubenswrapper[4805]: I1203 00:20:13.547230 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:13 crc kubenswrapper[4805]: I1203 00:20:13.814698 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:20:13 crc kubenswrapper[4805]: I1203 00:20:13.861933 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:13 crc kubenswrapper[4805]: I1203 00:20:13.993239 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.617550 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-679d56f588-9lmbh"] Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.618348 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.622064 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.622133 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-wd2fv" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.622167 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.622477 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.646354 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-679d56f588-9lmbh"] Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.765700 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-apiservice-cert\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.765836 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnrw6\" (UniqueName: \"kubernetes.io/projected/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-kube-api-access-nnrw6\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.765886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-webhook-cert\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.867782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-apiservice-cert\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.867865 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnrw6\" (UniqueName: \"kubernetes.io/projected/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-kube-api-access-nnrw6\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.867900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-webhook-cert\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.876574 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-apiservice-cert\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.877377 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-webhook-cert\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.893277 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnrw6\" (UniqueName: \"kubernetes.io/projected/cd1d345d-e19c-499d-8f85-0f02ba3e9a4f-kube-api-access-nnrw6\") pod \"elastic-operator-679d56f588-9lmbh\" (UID: \"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f\") " pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:15 crc kubenswrapper[4805]: I1203 00:20:15.938827 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-679d56f588-9lmbh" Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.634761 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-t7p89"] Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.635808 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.639930 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-t7p89"] Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.640295 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-4z5w5" Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.696608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjfg5\" (UniqueName: \"kubernetes.io/projected/f5c66118-0eeb-41d4-9c90-21745892df11-kube-api-access-jjfg5\") pod \"interconnect-operator-5bb49f789d-t7p89\" (UID: \"f5c66118-0eeb-41d4-9c90-21745892df11\") " pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.799121 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjfg5\" (UniqueName: \"kubernetes.io/projected/f5c66118-0eeb-41d4-9c90-21745892df11-kube-api-access-jjfg5\") pod \"interconnect-operator-5bb49f789d-t7p89\" (UID: \"f5c66118-0eeb-41d4-9c90-21745892df11\") " pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.834387 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjfg5\" (UniqueName: \"kubernetes.io/projected/f5c66118-0eeb-41d4-9c90-21745892df11-kube-api-access-jjfg5\") pod \"interconnect-operator-5bb49f789d-t7p89\" (UID: \"f5c66118-0eeb-41d4-9c90-21745892df11\") " pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" Dec 03 00:20:16 crc kubenswrapper[4805]: I1203 00:20:16.961348 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" Dec 03 00:20:17 crc kubenswrapper[4805]: I1203 00:20:17.811961 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:20:17 crc kubenswrapper[4805]: I1203 00:20:17.812507 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:20:17 crc kubenswrapper[4805]: I1203 00:20:17.812581 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:20:17 crc kubenswrapper[4805]: I1203 00:20:17.813432 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1e997872ae5cc6752d7b081f3a651fb9d62664b89bdfb81c87803944fd10204"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:20:17 crc kubenswrapper[4805]: I1203 00:20:17.813509 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://c1e997872ae5cc6752d7b081f3a651fb9d62664b89bdfb81c87803944fd10204" gracePeriod=600 Dec 03 00:20:17 crc kubenswrapper[4805]: I1203 00:20:17.936340 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-pr7vv"] Dec 03 00:20:17 crc kubenswrapper[4805]: I1203 00:20:17.936619 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pr7vv" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="registry-server" containerID="cri-o://173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636" gracePeriod=2 Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.540924 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-df9vv"] Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.541231 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-df9vv" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="registry-server" containerID="cri-o://5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918" gracePeriod=2 Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.838779 4805 generic.go:334] "Generic (PLEG): container finished" podID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerID="173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636" exitCode=0 Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.838874 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr7vv" event={"ID":"f6ad35fc-a27e-498d-b30e-442d3633116b","Type":"ContainerDied","Data":"173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636"} Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.874047 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerID="5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918" exitCode=0 Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.874181 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df9vv" 
event={"ID":"e8904235-7b22-4c3a-a1c6-e107f991b631","Type":"ContainerDied","Data":"5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918"} Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.877624 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="c1e997872ae5cc6752d7b081f3a651fb9d62664b89bdfb81c87803944fd10204" exitCode=0 Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.877683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"c1e997872ae5cc6752d7b081f3a651fb9d62664b89bdfb81c87803944fd10204"} Dec 03 00:20:18 crc kubenswrapper[4805]: I1203 00:20:18.877754 4805 scope.go:117] "RemoveContainer" containerID="2d57f60c8e52a89583b1e40f506517f73a5b87757f993a4d20080eabc8d60d72" Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.462928 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636 is running failed: container process not found" containerID="173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.463909 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636 is running failed: container process not found" containerID="173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.464536 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636 is running failed: container process not found" containerID="173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.464636 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-pr7vv" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="registry-server" Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.702623 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918 is running failed: container process not found" containerID="5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.703422 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918 is running failed: container process not found" containerID="5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.703765 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918 is running failed: container process not found" 
containerID="5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 00:20:23 crc kubenswrapper[4805]: E1203 00:20:23.703794 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-df9vv" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="registry-server" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.297749 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.298549 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl_openshift-operators(c8d0d1b6-817f-4d74-8373-1e186de34888): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.299694 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" podUID="c8d0d1b6-817f-4d74-8373-1e186de34888" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.325310 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.325520 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq_openshift-operators(dd60859c-d506-402e-90b3-22e44a9cde9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.326729 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" podUID="dd60859c-d506-402e-90b3-22e44a9cde9a" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.954762 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" podUID="c8d0d1b6-817f-4d74-8373-1e186de34888" Dec 03 00:20:29 crc kubenswrapper[4805]: E1203 00:20:29.955966 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" podUID="dd60859c-d506-402e-90b3-22e44a9cde9a" Dec 03 00:20:30 crc kubenswrapper[4805]: E1203 00:20:30.434176 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Dec 03 00:20:30 crc kubenswrapper[4805]: E1203 00:20:30.434455 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator 
--thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qcr7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-n4dq8_openshift-operators(164269e7-9f89-47f8-b363-bcb620782a98): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:20:30 crc 
kubenswrapper[4805]: E1203 00:20:30.435588 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" podUID="164269e7-9f89-47f8-b363-bcb620782a98" Dec 03 00:20:30 crc kubenswrapper[4805]: E1203 00:20:30.961705 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" podUID="164269e7-9f89-47f8-b363-bcb620782a98" Dec 03 00:20:31 crc kubenswrapper[4805]: E1203 00:20:31.253253 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" Dec 03 00:20:31 crc kubenswrapper[4805]: E1203 00:20:31.254015 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74tvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-zb45p_openshift-operators(c1a67691-4899-4efb-92fd-8e374caac92f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:20:31 crc kubenswrapper[4805]: E1203 00:20:31.255222 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-zb45p" podUID="c1a67691-4899-4efb-92fd-8e374caac92f" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.328545 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.394536 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.458888 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-catalog-content\") pod \"e8904235-7b22-4c3a-a1c6-e107f991b631\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.458962 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf68s\" (UniqueName: \"kubernetes.io/projected/e8904235-7b22-4c3a-a1c6-e107f991b631-kube-api-access-vf68s\") pod \"e8904235-7b22-4c3a-a1c6-e107f991b631\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.459023 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-utilities\") pod \"e8904235-7b22-4c3a-a1c6-e107f991b631\" (UID: \"e8904235-7b22-4c3a-a1c6-e107f991b631\") " Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.460133 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-utilities" (OuterVolumeSpecName: "utilities") pod 
"e8904235-7b22-4c3a-a1c6-e107f991b631" (UID: "e8904235-7b22-4c3a-a1c6-e107f991b631"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.485753 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8904235-7b22-4c3a-a1c6-e107f991b631-kube-api-access-vf68s" (OuterVolumeSpecName: "kube-api-access-vf68s") pod "e8904235-7b22-4c3a-a1c6-e107f991b631" (UID: "e8904235-7b22-4c3a-a1c6-e107f991b631"). InnerVolumeSpecName "kube-api-access-vf68s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.561419 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgw4k\" (UniqueName: \"kubernetes.io/projected/f6ad35fc-a27e-498d-b30e-442d3633116b-kube-api-access-qgw4k\") pod \"f6ad35fc-a27e-498d-b30e-442d3633116b\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.561468 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-utilities\") pod \"f6ad35fc-a27e-498d-b30e-442d3633116b\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.561544 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-catalog-content\") pod \"f6ad35fc-a27e-498d-b30e-442d3633116b\" (UID: \"f6ad35fc-a27e-498d-b30e-442d3633116b\") " Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.561835 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf68s\" (UniqueName: \"kubernetes.io/projected/e8904235-7b22-4c3a-a1c6-e107f991b631-kube-api-access-vf68s\") on node \"crc\" DevicePath \"\"" Dec 03 
00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.561846 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.562983 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-utilities" (OuterVolumeSpecName: "utilities") pod "f6ad35fc-a27e-498d-b30e-442d3633116b" (UID: "f6ad35fc-a27e-498d-b30e-442d3633116b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.566405 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ad35fc-a27e-498d-b30e-442d3633116b-kube-api-access-qgw4k" (OuterVolumeSpecName: "kube-api-access-qgw4k") pod "f6ad35fc-a27e-498d-b30e-442d3633116b" (UID: "f6ad35fc-a27e-498d-b30e-442d3633116b"). InnerVolumeSpecName "kube-api-access-qgw4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.604445 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-679d56f588-9lmbh"] Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.628050 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ad35fc-a27e-498d-b30e-442d3633116b" (UID: "f6ad35fc-a27e-498d-b30e-442d3633116b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.638642 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-t7p89"] Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.651885 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8904235-7b22-4c3a-a1c6-e107f991b631" (UID: "e8904235-7b22-4c3a-a1c6-e107f991b631"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.663451 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgw4k\" (UniqueName: \"kubernetes.io/projected/f6ad35fc-a27e-498d-b30e-442d3633116b-kube-api-access-qgw4k\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.663490 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.663505 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad35fc-a27e-498d-b30e-442d3633116b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.663516 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8904235-7b22-4c3a-a1c6-e107f991b631-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.967891 4805 generic.go:334] "Generic (PLEG): container finished" podID="fc243a86-5fa7-4260-8c90-92eaaac927fe" 
containerID="f96cf52f2bddb183de96d4aee280e2f6ecc5f68ce5ccd3aff62f91b78f96601f" exitCode=0 Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.967966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" event={"ID":"fc243a86-5fa7-4260-8c90-92eaaac927fe","Type":"ContainerDied","Data":"f96cf52f2bddb183de96d4aee280e2f6ecc5f68ce5ccd3aff62f91b78f96601f"} Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.968947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" event={"ID":"f5c66118-0eeb-41d4-9c90-21745892df11","Type":"ContainerStarted","Data":"95571ce8e16f557a765423a1cdb6f7a5a5ecb23f2fb9d51c3aa2c907e37cb47f"} Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.972029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df9vv" event={"ID":"e8904235-7b22-4c3a-a1c6-e107f991b631","Type":"ContainerDied","Data":"7414efd8f0089f2d553eadddbfbdab8731bd6ab6fd6beb29a39217653dd9d85e"} Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.972039 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-df9vv" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.972067 4805 scope.go:117] "RemoveContainer" containerID="5edcb2b00b54c65b4f6d883ca791602c04a4314e49c59cf6180192708fe88918" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.976136 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pr7vv" Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.982269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr7vv" event={"ID":"f6ad35fc-a27e-498d-b30e-442d3633116b","Type":"ContainerDied","Data":"f66ec7775c7b85bd17589d2d46ee129dcf28f8d5512dbe0c0d470e6c2ac3c9f8"} Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.989469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" event={"ID":"283d9c09-ce9f-43b1-9849-49926b74fdb2","Type":"ContainerStarted","Data":"d1126099a8e1e669451599aa4e260f08ad4b9d554ea1535f50699322da010387"} Dec 03 00:20:31 crc kubenswrapper[4805]: I1203 00:20:31.990110 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.000499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"338a7cc0895913e63346fa19d3b54987d60ec7ea8c91dbfbecd7727b32876440"} Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.010388 4805 scope.go:117] "RemoveContainer" containerID="f72ecbe2a840a680c3013c430507e7180046ac699d51270cff92085f288c7646" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.015694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-679d56f588-9lmbh" event={"ID":"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f","Type":"ContainerStarted","Data":"6fcb32e80d1aa6925d4a483769f555ec991dc8a70f16a2d0583628e7c252ef92"} Dec 03 00:20:32 crc kubenswrapper[4805]: E1203 00:20:32.022295 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-zb45p" podUID="c1a67691-4899-4efb-92fd-8e374caac92f" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.052090 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" podStartSLOduration=1.9276102229999998 podStartE2EDuration="24.052070752s" podCreationTimestamp="2025-12-03 00:20:08 +0000 UTC" firstStartedPulling="2025-12-03 00:20:09.17149661 +0000 UTC m=+833.020459226" lastFinishedPulling="2025-12-03 00:20:31.295957149 +0000 UTC m=+855.144919755" observedRunningTime="2025-12-03 00:20:32.046810029 +0000 UTC m=+855.895772635" watchObservedRunningTime="2025-12-03 00:20:32.052070752 +0000 UTC m=+855.901033348" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.096618 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-sg4sp" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.100231 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-df9vv"] Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.109573 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-df9vv"] Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.162332 4805 scope.go:117] "RemoveContainer" containerID="6a6baa9ccccd49116233d267fc5f2c4ba823a5531d3b6fb130554be79bcdd907" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.189885 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pr7vv"] Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.200512 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pr7vv"] Dec 03 00:20:32 
crc kubenswrapper[4805]: I1203 00:20:32.214537 4805 scope.go:117] "RemoveContainer" containerID="173a80514fc62426d6b0a3016a3b21f71ad5d4d37bd8d2b24811a83c483ef636" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.259803 4805 scope.go:117] "RemoveContainer" containerID="c8aaca849655d398d5f448264e72f9dd490d402cb92ee664f2721dd81799745c" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.338573 4805 scope.go:117] "RemoveContainer" containerID="eeb193cfacb40034f196e31fe37aad40914e8ab2955966c6ad2b3612d9e9156e" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.433689 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" path="/var/lib/kubelet/pods/e8904235-7b22-4c3a-a1c6-e107f991b631/volumes" Dec 03 00:20:32 crc kubenswrapper[4805]: I1203 00:20:32.434384 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" path="/var/lib/kubelet/pods/f6ad35fc-a27e-498d-b30e-442d3633116b/volumes" Dec 03 00:20:33 crc kubenswrapper[4805]: I1203 00:20:33.028086 4805 generic.go:334] "Generic (PLEG): container finished" podID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerID="adcdf321910474951318c998fe06197df22a6a1b6413ebdd7406d0bd9c9ace27" exitCode=0 Dec 03 00:20:33 crc kubenswrapper[4805]: I1203 00:20:33.029358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" event={"ID":"fc243a86-5fa7-4260-8c90-92eaaac927fe","Type":"ContainerDied","Data":"adcdf321910474951318c998fe06197df22a6a1b6413ebdd7406d0bd9c9ace27"} Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.012215 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.065130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" event={"ID":"fc243a86-5fa7-4260-8c90-92eaaac927fe","Type":"ContainerDied","Data":"304aa83bb9541634187897a9f7cc2b88ef4f4224083dc141e00445ddb8809d7a"} Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.065186 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="304aa83bb9541634187897a9f7cc2b88ef4f4224083dc141e00445ddb8809d7a" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.065265 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.123299 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-bundle\") pod \"fc243a86-5fa7-4260-8c90-92eaaac927fe\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.123404 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgqpr\" (UniqueName: \"kubernetes.io/projected/fc243a86-5fa7-4260-8c90-92eaaac927fe-kube-api-access-jgqpr\") pod \"fc243a86-5fa7-4260-8c90-92eaaac927fe\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.124762 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-util\") pod \"fc243a86-5fa7-4260-8c90-92eaaac927fe\" (UID: \"fc243a86-5fa7-4260-8c90-92eaaac927fe\") " Dec 03 00:20:35 crc 
kubenswrapper[4805]: I1203 00:20:35.124858 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-bundle" (OuterVolumeSpecName: "bundle") pod "fc243a86-5fa7-4260-8c90-92eaaac927fe" (UID: "fc243a86-5fa7-4260-8c90-92eaaac927fe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.125067 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.139681 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc243a86-5fa7-4260-8c90-92eaaac927fe-kube-api-access-jgqpr" (OuterVolumeSpecName: "kube-api-access-jgqpr") pod "fc243a86-5fa7-4260-8c90-92eaaac927fe" (UID: "fc243a86-5fa7-4260-8c90-92eaaac927fe"). InnerVolumeSpecName "kube-api-access-jgqpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.141487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-util" (OuterVolumeSpecName: "util") pod "fc243a86-5fa7-4260-8c90-92eaaac927fe" (UID: "fc243a86-5fa7-4260-8c90-92eaaac927fe"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.226767 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgqpr\" (UniqueName: \"kubernetes.io/projected/fc243a86-5fa7-4260-8c90-92eaaac927fe-kube-api-access-jgqpr\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:35 crc kubenswrapper[4805]: I1203 00:20:35.226821 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc243a86-5fa7-4260-8c90-92eaaac927fe-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.087275 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-679d56f588-9lmbh" event={"ID":"cd1d345d-e19c-499d-8f85-0f02ba3e9a4f","Type":"ContainerStarted","Data":"968a078c69df75169e6a344269daf82a3712ccf2bde16d306f322b2d864929f1"} Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.110284 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-679d56f588-9lmbh" podStartSLOduration=17.804050829 podStartE2EDuration="21.11025271s" podCreationTimestamp="2025-12-03 00:20:15 +0000 UTC" firstStartedPulling="2025-12-03 00:20:31.621305005 +0000 UTC m=+855.470267611" lastFinishedPulling="2025-12-03 00:20:34.927506896 +0000 UTC m=+858.776469492" observedRunningTime="2025-12-03 00:20:36.109364488 +0000 UTC m=+859.958327144" watchObservedRunningTime="2025-12-03 00:20:36.11025271 +0000 UTC m=+859.959215326" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.257619 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.257917 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="extract-content" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.257937 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="extract-content" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.257951 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="extract-content" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.257959 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="extract-content" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.257970 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="extract-utilities" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.257978 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="extract-utilities" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.258008 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerName="pull" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258016 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerName="pull" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.258025 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerName="util" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258033 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerName="util" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.258041 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerName="extract" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258049 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerName="extract" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.258071 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="registry-server" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258081 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="registry-server" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.258089 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="registry-server" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258096 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="registry-server" Dec 03 00:20:36 crc kubenswrapper[4805]: E1203 00:20:36.258105 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="extract-utilities" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258112 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="extract-utilities" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258267 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8904235-7b22-4c3a-a1c6-e107f991b631" containerName="registry-server" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258294 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ad35fc-a27e-498d-b30e-442d3633116b" containerName="registry-server" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.258305 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc243a86-5fa7-4260-8c90-92eaaac927fe" containerName="extract" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.259286 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.262146 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.265941 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-8lg2j" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.266748 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.266868 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.266899 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.266940 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.267302 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.268610 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.268884 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.281612 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.449279 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.449327 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.449353 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.449381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.449952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/080988eb-4bd7-49e9-8689-6f551aa99555-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.449981 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450006 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450311 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450505 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450639 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.450771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552616 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 
00:20:36.552649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552708 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552731 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-http-certificates\") pod 
\"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552855 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552914 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/080988eb-4bd7-49e9-8689-6f551aa99555-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.552954 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.553002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.553035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.553662 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.553780 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.553858 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.554257 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.554772 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.554919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.555042 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.555229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.559735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/080988eb-4bd7-49e9-8689-6f551aa99555-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.559900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.560346 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.560574 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-remote-certificate-authorities\") pod 
\"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.560844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.561407 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.561497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/080988eb-4bd7-49e9-8689-6f551aa99555-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"080988eb-4bd7-49e9-8689-6f551aa99555\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:36 crc kubenswrapper[4805]: I1203 00:20:36.576423 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:20:39 crc kubenswrapper[4805]: I1203 00:20:39.428833 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:20:39 crc kubenswrapper[4805]: W1203 00:20:39.434408 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod080988eb_4bd7_49e9_8689_6f551aa99555.slice/crio-d7a866339225c5a6912200d374382cf72c088723b1e93fda897320f07c407f17 WatchSource:0}: Error finding container d7a866339225c5a6912200d374382cf72c088723b1e93fda897320f07c407f17: Status 404 returned error can't find the container with id d7a866339225c5a6912200d374382cf72c088723b1e93fda897320f07c407f17 Dec 03 00:20:40 crc kubenswrapper[4805]: I1203 00:20:40.112851 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"080988eb-4bd7-49e9-8689-6f551aa99555","Type":"ContainerStarted","Data":"d7a866339225c5a6912200d374382cf72c088723b1e93fda897320f07c407f17"} Dec 03 00:20:40 crc kubenswrapper[4805]: I1203 00:20:40.114362 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" event={"ID":"f5c66118-0eeb-41d4-9c90-21745892df11","Type":"ContainerStarted","Data":"c74d5968a12a81667c084df11c721054940d62bc952ad569278240c0f4d9c072"} Dec 03 00:20:40 crc kubenswrapper[4805]: I1203 00:20:40.140585 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-t7p89" podStartSLOduration=16.481407001 podStartE2EDuration="24.140533929s" podCreationTimestamp="2025-12-03 00:20:16 +0000 UTC" firstStartedPulling="2025-12-03 00:20:31.64259134 +0000 UTC m=+855.491553946" lastFinishedPulling="2025-12-03 00:20:39.301718268 +0000 UTC m=+863.150680874" observedRunningTime="2025-12-03 00:20:40.134000415 +0000 UTC m=+863.982963051" 
watchObservedRunningTime="2025-12-03 00:20:40.140533929 +0000 UTC m=+863.989496535" Dec 03 00:20:46 crc kubenswrapper[4805]: I1203 00:20:46.159110 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" event={"ID":"c8d0d1b6-817f-4d74-8373-1e186de34888","Type":"ContainerStarted","Data":"1d62fd21c78d3a014bc7aff87197783e0900cb47714b31b1a6223ccc96f382d8"} Dec 03 00:20:46 crc kubenswrapper[4805]: I1203 00:20:46.164612 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" event={"ID":"dd60859c-d506-402e-90b3-22e44a9cde9a","Type":"ContainerStarted","Data":"8beb98b82eedb18481eb451e44761dda545b6e7bae9849305c85c918f472cd6f"} Dec 03 00:20:46 crc kubenswrapper[4805]: I1203 00:20:46.195176 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl" podStartSLOduration=2.043806633 podStartE2EDuration="38.195155373s" podCreationTimestamp="2025-12-03 00:20:08 +0000 UTC" firstStartedPulling="2025-12-03 00:20:09.012832153 +0000 UTC m=+832.861794759" lastFinishedPulling="2025-12-03 00:20:45.164180893 +0000 UTC m=+869.013143499" observedRunningTime="2025-12-03 00:20:46.191876631 +0000 UTC m=+870.040839237" watchObservedRunningTime="2025-12-03 00:20:46.195155373 +0000 UTC m=+870.044117979" Dec 03 00:20:46 crc kubenswrapper[4805]: I1203 00:20:46.239457 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq" podStartSLOduration=-9223371998.615349 podStartE2EDuration="38.239427736s" podCreationTimestamp="2025-12-03 00:20:08 +0000 UTC" firstStartedPulling="2025-12-03 00:20:08.910884551 +0000 UTC m=+832.759847157" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:20:46.230973354 +0000 UTC 
m=+870.079935960" watchObservedRunningTime="2025-12-03 00:20:46.239427736 +0000 UTC m=+870.088390342" Dec 03 00:20:47 crc kubenswrapper[4805]: I1203 00:20:47.182578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" event={"ID":"164269e7-9f89-47f8-b363-bcb620782a98","Type":"ContainerStarted","Data":"5886fd215fbfd6977098cf92c06f592ca0097a2f2287483b404e906042bb779c"} Dec 03 00:20:47 crc kubenswrapper[4805]: I1203 00:20:47.207019 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-n4dq8" podStartSLOduration=2.748089242 podStartE2EDuration="40.207000283s" podCreationTimestamp="2025-12-03 00:20:07 +0000 UTC" firstStartedPulling="2025-12-03 00:20:09.051483884 +0000 UTC m=+832.900446490" lastFinishedPulling="2025-12-03 00:20:46.510394925 +0000 UTC m=+870.359357531" observedRunningTime="2025-12-03 00:20:47.20096036 +0000 UTC m=+871.049922966" watchObservedRunningTime="2025-12-03 00:20:47.207000283 +0000 UTC m=+871.055962889" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.508678 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8"] Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.509747 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.512847 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-bzh5r" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.513892 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.519528 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.531019 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8"] Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.575048 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-n7mq8\" (UID: \"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.575115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4df6\" (UniqueName: \"kubernetes.io/projected/1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec-kube-api-access-r4df6\") pod \"cert-manager-operator-controller-manager-5446d6888b-n7mq8\" (UID: \"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.675828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-n7mq8\" (UID: \"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.675876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4df6\" (UniqueName: \"kubernetes.io/projected/1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec-kube-api-access-r4df6\") pod \"cert-manager-operator-controller-manager-5446d6888b-n7mq8\" (UID: \"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.676644 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-n7mq8\" (UID: \"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.714087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4df6\" (UniqueName: \"kubernetes.io/projected/1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec-kube-api-access-r4df6\") pod \"cert-manager-operator-controller-manager-5446d6888b-n7mq8\" (UID: \"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:48 crc kubenswrapper[4805]: I1203 00:20:48.846851 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" Dec 03 00:20:55 crc kubenswrapper[4805]: I1203 00:20:55.603772 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8"] Dec 03 00:20:55 crc kubenswrapper[4805]: W1203 00:20:55.625688 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c4c744c_d6a1_43ef_ac85_6b9aa2f807ec.slice/crio-643ef5034fcbd82481091a59d67a9382a67342138376fbbbf331405de22d3831 WatchSource:0}: Error finding container 643ef5034fcbd82481091a59d67a9382a67342138376fbbbf331405de22d3831: Status 404 returned error can't find the container with id 643ef5034fcbd82481091a59d67a9382a67342138376fbbbf331405de22d3831 Dec 03 00:20:56 crc kubenswrapper[4805]: I1203 00:20:56.271897 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-zb45p" event={"ID":"c1a67691-4899-4efb-92fd-8e374caac92f","Type":"ContainerStarted","Data":"5144cf4120dd4d11ccd98c185d4f2985985774ead61091540d8dca70c77fafa6"} Dec 03 00:20:56 crc kubenswrapper[4805]: I1203 00:20:56.272233 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:20:56 crc kubenswrapper[4805]: I1203 00:20:56.273298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" event={"ID":"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec","Type":"ContainerStarted","Data":"643ef5034fcbd82481091a59d67a9382a67342138376fbbbf331405de22d3831"} Dec 03 00:20:56 crc kubenswrapper[4805]: I1203 00:20:56.274616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"080988eb-4bd7-49e9-8689-6f551aa99555","Type":"ContainerStarted","Data":"43890c6dd6cdb7f70b315bc0ff7a6c4e720fe61e44bc3781720d9ed1a2d14bb4"} Dec 03 00:20:56 crc kubenswrapper[4805]: I1203 00:20:56.307357 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-zb45p" podStartSLOduration=2.255554624 podStartE2EDuration="48.307337881s" podCreationTimestamp="2025-12-03 00:20:08 +0000 UTC" firstStartedPulling="2025-12-03 00:20:09.310323539 +0000 UTC m=+833.159286145" lastFinishedPulling="2025-12-03 00:20:55.362106796 +0000 UTC m=+879.211069402" observedRunningTime="2025-12-03 00:20:56.299041932 +0000 UTC m=+880.148004548" watchObservedRunningTime="2025-12-03 00:20:56.307337881 +0000 UTC m=+880.156300487" Dec 03 00:20:56 crc kubenswrapper[4805]: I1203 00:20:56.566667 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:20:56 crc kubenswrapper[4805]: I1203 00:20:56.615042 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 03 00:20:59 crc kubenswrapper[4805]: I1203 00:20:59.295406 4805 generic.go:334] "Generic (PLEG): container finished" podID="080988eb-4bd7-49e9-8689-6f551aa99555" containerID="43890c6dd6cdb7f70b315bc0ff7a6c4e720fe61e44bc3781720d9ed1a2d14bb4" exitCode=0 Dec 03 00:20:59 crc kubenswrapper[4805]: I1203 00:20:59.295499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"080988eb-4bd7-49e9-8689-6f551aa99555","Type":"ContainerDied","Data":"43890c6dd6cdb7f70b315bc0ff7a6c4e720fe61e44bc3781720d9ed1a2d14bb4"} Dec 03 00:21:05 crc kubenswrapper[4805]: I1203 00:21:05.332657 4805 generic.go:334] "Generic (PLEG): container finished" podID="080988eb-4bd7-49e9-8689-6f551aa99555" containerID="bb5f0b49aaf1dabe27612cf2d2ba123f3a757cb3759f388c905bb48012404ac7" exitCode=0 Dec 03 00:21:05 crc kubenswrapper[4805]: 
I1203 00:21:05.332745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"080988eb-4bd7-49e9-8689-6f551aa99555","Type":"ContainerDied","Data":"bb5f0b49aaf1dabe27612cf2d2ba123f3a757cb3759f388c905bb48012404ac7"} Dec 03 00:21:06 crc kubenswrapper[4805]: I1203 00:21:06.341819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"080988eb-4bd7-49e9-8689-6f551aa99555","Type":"ContainerStarted","Data":"a1241c9f31540da874d92d1f9649b956da84f4c3b8dffd9ad36ea412111c140e"} Dec 03 00:21:06 crc kubenswrapper[4805]: I1203 00:21:06.342369 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.563605 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=15.482374191 podStartE2EDuration="31.563576132s" podCreationTimestamp="2025-12-03 00:20:36 +0000 UTC" firstStartedPulling="2025-12-03 00:20:39.437854069 +0000 UTC m=+863.286816665" lastFinishedPulling="2025-12-03 00:20:55.519056 +0000 UTC m=+879.368018606" observedRunningTime="2025-12-03 00:21:06.37260012 +0000 UTC m=+890.221562726" watchObservedRunningTime="2025-12-03 00:21:07.563576132 +0000 UTC m=+891.412538738" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.571131 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.572391 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.575280 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.575307 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.577474 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.580860 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.585033 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667328 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667394 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlnfk\" (UniqueName: \"kubernetes.io/projected/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-kube-api-access-zlnfk\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667506 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildcachedir\") pod 
\"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.667647 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlnfk\" (UniqueName: \"kubernetes.io/projected/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-kube-api-access-zlnfk\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769348 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769580 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769674 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.769940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.770188 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.770329 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.770460 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.770614 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.770854 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc 
kubenswrapper[4805]: I1203 00:21:07.771029 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.771038 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.771505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.776298 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.787942 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:07 crc kubenswrapper[4805]: I1203 00:21:07.788142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlnfk\" (UniqueName: \"kubernetes.io/projected/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-kube-api-access-zlnfk\") pod \"service-telemetry-operator-1-build\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:08 crc kubenswrapper[4805]: I1203 00:21:08.013374 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:08 crc kubenswrapper[4805]: I1203 00:21:08.355454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" event={"ID":"1c4c744c-d6a1-43ef-ac85-6b9aa2f807ec","Type":"ContainerStarted","Data":"5666f1f2238972028029fa74a49ced00666b712559c8b63ca67878e952674ea7"} Dec 03 00:21:08 crc kubenswrapper[4805]: I1203 00:21:08.380343 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-n7mq8" podStartSLOduration=8.551268772 podStartE2EDuration="20.380319488s" podCreationTimestamp="2025-12-03 00:20:48 +0000 UTC" firstStartedPulling="2025-12-03 00:20:55.628709586 +0000 UTC m=+879.477672192" lastFinishedPulling="2025-12-03 00:21:07.457760302 +0000 UTC m=+891.306722908" observedRunningTime="2025-12-03 00:21:08.373962938 +0000 UTC m=+892.222925544" watchObservedRunningTime="2025-12-03 00:21:08.380319488 +0000 UTC m=+892.229282104" Dec 03 00:21:08 crc kubenswrapper[4805]: I1203 00:21:08.509186 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:08 crc kubenswrapper[4805]: I1203 00:21:08.820172 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operators/perses-operator-5446b9c989-zb45p" Dec 03 00:21:09 crc kubenswrapper[4805]: I1203 00:21:09.363702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d3e3ec7b-4e1c-433d-8213-8a206a48bc99","Type":"ContainerStarted","Data":"5eaa9e7107c1928e35c967bbc15ffb551af2f132c4b6233345efd5b686480240"} Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.734293 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bw2pp"] Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.735064 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.737247 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-95vmd" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.737256 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.739359 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.751378 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bw2pp"] Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.812422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4899534-51b2-4ed8-81e4-ea8a7c0f55ac-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bw2pp\" (UID: \"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.812718 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7sh\" (UniqueName: \"kubernetes.io/projected/e4899534-51b2-4ed8-81e4-ea8a7c0f55ac-kube-api-access-9w7sh\") pod \"cert-manager-webhook-f4fb5df64-bw2pp\" (UID: \"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.914104 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7sh\" (UniqueName: \"kubernetes.io/projected/e4899534-51b2-4ed8-81e4-ea8a7c0f55ac-kube-api-access-9w7sh\") pod \"cert-manager-webhook-f4fb5df64-bw2pp\" (UID: \"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.914170 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4899534-51b2-4ed8-81e4-ea8a7c0f55ac-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bw2pp\" (UID: \"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.941626 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4899534-51b2-4ed8-81e4-ea8a7c0f55ac-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bw2pp\" (UID: \"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:10 crc kubenswrapper[4805]: I1203 00:21:10.944810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7sh\" (UniqueName: \"kubernetes.io/projected/e4899534-51b2-4ed8-81e4-ea8a7c0f55ac-kube-api-access-9w7sh\") pod \"cert-manager-webhook-f4fb5df64-bw2pp\" (UID: \"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 
00:21:11 crc kubenswrapper[4805]: I1203 00:21:11.057183 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:11 crc kubenswrapper[4805]: I1203 00:21:11.460627 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bw2pp"] Dec 03 00:21:12 crc kubenswrapper[4805]: I1203 00:21:12.399911 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" event={"ID":"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac","Type":"ContainerStarted","Data":"8617bed56dda400779d12e0500449d9f929cc23fdfad906b7e7ffb0dd8318a60"} Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.564921 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg"] Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.566031 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.575882 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8gsxm" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.612408 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg"] Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.624132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/76757abd-1789-4cfa-b495-18540305dd10-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2w8gg\" (UID: \"76757abd-1789-4cfa-b495-18540305dd10\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.624222 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c77kg\" (UniqueName: \"kubernetes.io/projected/76757abd-1789-4cfa-b495-18540305dd10-kube-api-access-c77kg\") pod \"cert-manager-cainjector-855d9ccff4-2w8gg\" (UID: \"76757abd-1789-4cfa-b495-18540305dd10\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.692913 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="080988eb-4bd7-49e9-8689-6f551aa99555" containerName="elasticsearch" probeResult="failure" output=< Dec 03 00:21:16 crc kubenswrapper[4805]: {"timestamp": "2025-12-03T00:21:16+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 03 00:21:16 crc kubenswrapper[4805]: > Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.725962 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/76757abd-1789-4cfa-b495-18540305dd10-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2w8gg\" (UID: \"76757abd-1789-4cfa-b495-18540305dd10\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.726019 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c77kg\" (UniqueName: \"kubernetes.io/projected/76757abd-1789-4cfa-b495-18540305dd10-kube-api-access-c77kg\") pod \"cert-manager-cainjector-855d9ccff4-2w8gg\" (UID: \"76757abd-1789-4cfa-b495-18540305dd10\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.751033 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/76757abd-1789-4cfa-b495-18540305dd10-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2w8gg\" (UID: 
\"76757abd-1789-4cfa-b495-18540305dd10\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.752424 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c77kg\" (UniqueName: \"kubernetes.io/projected/76757abd-1789-4cfa-b495-18540305dd10-kube-api-access-c77kg\") pod \"cert-manager-cainjector-855d9ccff4-2w8gg\" (UID: \"76757abd-1789-4cfa-b495-18540305dd10\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:16 crc kubenswrapper[4805]: I1203 00:21:16.914621 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" Dec 03 00:21:17 crc kubenswrapper[4805]: I1203 00:21:17.941435 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.652106 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.654450 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.656132 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.656132 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.657141 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.701114 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4pm\" (UniqueName: \"kubernetes.io/projected/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-kube-api-access-px4pm\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772826 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772869 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc 
kubenswrapper[4805]: I1203 00:21:19.772898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.772996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.773018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874278 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874364 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874385 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4pm\" (UniqueName: \"kubernetes.io/projected/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-kube-api-access-px4pm\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 
00:21:19.874437 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874526 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.874597 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildcachedir\") pod 
\"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875330 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875417 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875538 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.875701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.879834 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.880625 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.884615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.942679 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-px4pm\" (UniqueName: \"kubernetes.io/projected/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-kube-api-access-px4pm\") pod \"service-telemetry-operator-2-build\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.966905 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nplwz"] Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.967888 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.969879 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vjh7z" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.975334 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:21:19 crc kubenswrapper[4805]: I1203 00:21:19.978587 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nplwz"] Dec 03 00:21:20 crc kubenswrapper[4805]: I1203 00:21:20.076850 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62abc6ca-92f7-46ac-b22d-333dfa327640-bound-sa-token\") pod \"cert-manager-86cb77c54b-nplwz\" (UID: \"62abc6ca-92f7-46ac-b22d-333dfa327640\") " pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:20 crc kubenswrapper[4805]: I1203 00:21:20.076967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhqt\" (UniqueName: \"kubernetes.io/projected/62abc6ca-92f7-46ac-b22d-333dfa327640-kube-api-access-6nhqt\") pod \"cert-manager-86cb77c54b-nplwz\" (UID: 
\"62abc6ca-92f7-46ac-b22d-333dfa327640\") " pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:20 crc kubenswrapper[4805]: I1203 00:21:20.178894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62abc6ca-92f7-46ac-b22d-333dfa327640-bound-sa-token\") pod \"cert-manager-86cb77c54b-nplwz\" (UID: \"62abc6ca-92f7-46ac-b22d-333dfa327640\") " pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:20 crc kubenswrapper[4805]: I1203 00:21:20.179401 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhqt\" (UniqueName: \"kubernetes.io/projected/62abc6ca-92f7-46ac-b22d-333dfa327640-kube-api-access-6nhqt\") pod \"cert-manager-86cb77c54b-nplwz\" (UID: \"62abc6ca-92f7-46ac-b22d-333dfa327640\") " pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:20 crc kubenswrapper[4805]: I1203 00:21:20.197600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhqt\" (UniqueName: \"kubernetes.io/projected/62abc6ca-92f7-46ac-b22d-333dfa327640-kube-api-access-6nhqt\") pod \"cert-manager-86cb77c54b-nplwz\" (UID: \"62abc6ca-92f7-46ac-b22d-333dfa327640\") " pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:20 crc kubenswrapper[4805]: I1203 00:21:20.208980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62abc6ca-92f7-46ac-b22d-333dfa327640-bound-sa-token\") pod \"cert-manager-86cb77c54b-nplwz\" (UID: \"62abc6ca-92f7-46ac-b22d-333dfa327640\") " pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:20 crc kubenswrapper[4805]: I1203 00:21:20.288496 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nplwz" Dec 03 00:21:21 crc kubenswrapper[4805]: I1203 00:21:21.682756 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="080988eb-4bd7-49e9-8689-6f551aa99555" containerName="elasticsearch" probeResult="failure" output=< Dec 03 00:21:21 crc kubenswrapper[4805]: {"timestamp": "2025-12-03T00:21:21+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 03 00:21:21 crc kubenswrapper[4805]: > Dec 03 00:21:26 crc kubenswrapper[4805]: I1203 00:21:26.683474 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="080988eb-4bd7-49e9-8689-6f551aa99555" containerName="elasticsearch" probeResult="failure" output=< Dec 03 00:21:26 crc kubenswrapper[4805]: {"timestamp": "2025-12-03T00:21:26+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 03 00:21:26 crc kubenswrapper[4805]: > Dec 03 00:21:28 crc kubenswrapper[4805]: E1203 00:21:28.684469 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 03 00:21:28 crc kubenswrapper[4805]: E1203 00:21:28.684971 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w7sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-bw2pp_cert-manager(e4899534-51b2-4ed8-81e4-ea8a7c0f55ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:21:28 crc kubenswrapper[4805]: E1203 00:21:28.688157 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" podUID="e4899534-51b2-4ed8-81e4-ea8a7c0f55ac" Dec 03 00:21:28 crc kubenswrapper[4805]: I1203 00:21:28.946735 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 03 00:21:28 crc kubenswrapper[4805]: W1203 00:21:28.958650 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ed48b3b_6ef2_4c7b_a245_7b6ec3fa147d.slice/crio-13cb98b2bd5ae7a93c833903fb9e393ec9b51c0fec71095f1c0d7f85aac94e9e WatchSource:0}: Error finding 
container 13cb98b2bd5ae7a93c833903fb9e393ec9b51c0fec71095f1c0d7f85aac94e9e: Status 404 returned error can't find the container with id 13cb98b2bd5ae7a93c833903fb9e393ec9b51c0fec71095f1c0d7f85aac94e9e Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.091120 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg"] Dec 03 00:21:29 crc kubenswrapper[4805]: W1203 00:21:29.107007 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76757abd_1789_4cfa_b495_18540305dd10.slice/crio-eb57ded99d3850c2021c6ea79cea113f11c594e6f18a152169f4e5a7f4951548 WatchSource:0}: Error finding container eb57ded99d3850c2021c6ea79cea113f11c594e6f18a152169f4e5a7f4951548: Status 404 returned error can't find the container with id eb57ded99d3850c2021c6ea79cea113f11c594e6f18a152169f4e5a7f4951548 Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.148890 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nplwz"] Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.649564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nplwz" event={"ID":"62abc6ca-92f7-46ac-b22d-333dfa327640","Type":"ContainerStarted","Data":"e124ac5aa4b9b8cf9ba0b889913f06db181167931ea950754e7969aba4ff1b14"} Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.651683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d","Type":"ContainerStarted","Data":"140611b8d0188f4e88ec4fd1707e6345efead5b7fe08fbf2533d2720c2f612f5"} Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.651721 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d","Type":"ContainerStarted","Data":"13cb98b2bd5ae7a93c833903fb9e393ec9b51c0fec71095f1c0d7f85aac94e9e"} Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.653434 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" event={"ID":"76757abd-1789-4cfa-b495-18540305dd10","Type":"ContainerStarted","Data":"eb57ded99d3850c2021c6ea79cea113f11c594e6f18a152169f4e5a7f4951548"} Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.657372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d3e3ec7b-4e1c-433d-8213-8a206a48bc99","Type":"ContainerStarted","Data":"10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f"} Dec 03 00:21:29 crc kubenswrapper[4805]: I1203 00:21:29.657519 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="d3e3ec7b-4e1c-433d-8213-8a206a48bc99" containerName="manage-dockerfile" containerID="cri-o://10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f" gracePeriod=30 Dec 03 00:21:29 crc kubenswrapper[4805]: E1203 00:21:29.659855 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" podUID="e4899534-51b2-4ed8-81e4-ea8a7c0f55ac" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.095296 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d3e3ec7b-4e1c-433d-8213-8a206a48bc99/manage-dockerfile/0.log" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.095383 4805 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238016 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-pull\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238069 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-root\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238118 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-run\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238144 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildworkdir\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238181 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildcachedir\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238258 
4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-node-pullsecrets\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-blob-cache\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238383 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238453 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238703 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238750 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238922 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238953 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.238985 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-system-configs\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239039 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-proxy-ca-bundles\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239063 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-ca-bundles\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239356 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239396 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). 
InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlnfk\" (UniqueName: \"kubernetes.io/projected/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-kube-api-access-zlnfk\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239518 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-push\") pod \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\" (UID: \"d3e3ec7b-4e1c-433d-8213-8a206a48bc99\") " Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239611 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239788 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239806 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239815 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239825 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239833 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239842 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239851 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc 
kubenswrapper[4805]: I1203 00:21:30.239860 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.239868 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.244210 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.245057 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.245658 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-kube-api-access-zlnfk" (OuterVolumeSpecName: "kube-api-access-zlnfk") pod "d3e3ec7b-4e1c-433d-8213-8a206a48bc99" (UID: "d3e3ec7b-4e1c-433d-8213-8a206a48bc99"). InnerVolumeSpecName "kube-api-access-zlnfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.341147 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.341182 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.341207 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlnfk\" (UniqueName: \"kubernetes.io/projected/d3e3ec7b-4e1c-433d-8213-8a206a48bc99-kube-api-access-zlnfk\") on node \"crc\" DevicePath \"\"" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.664760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nplwz" event={"ID":"62abc6ca-92f7-46ac-b22d-333dfa327640","Type":"ContainerStarted","Data":"f1867c2ce063b0fd65b35ce6353b96b11f8cce1a0173eef4a0cb86ff372743c9"} Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.666055 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" event={"ID":"76757abd-1789-4cfa-b495-18540305dd10","Type":"ContainerStarted","Data":"48b804859aa13abcd6960f398dba1c5f297e928b435c53ae77097a9cac585aa0"} Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.667334 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_d3e3ec7b-4e1c-433d-8213-8a206a48bc99/manage-dockerfile/0.log" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.667396 4805 generic.go:334] "Generic (PLEG): container finished" podID="d3e3ec7b-4e1c-433d-8213-8a206a48bc99" 
containerID="10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f" exitCode=1 Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.667563 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d3e3ec7b-4e1c-433d-8213-8a206a48bc99","Type":"ContainerDied","Data":"10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f"} Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.667610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"d3e3ec7b-4e1c-433d-8213-8a206a48bc99","Type":"ContainerDied","Data":"5eaa9e7107c1928e35c967bbc15ffb551af2f132c4b6233345efd5b686480240"} Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.667636 4805 scope.go:117] "RemoveContainer" containerID="10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.667724 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.706962 4805 scope.go:117] "RemoveContainer" containerID="10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f" Dec 03 00:21:30 crc kubenswrapper[4805]: E1203 00:21:30.712653 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f\": container with ID starting with 10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f not found: ID does not exist" containerID="10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.712697 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f"} err="failed to get container status \"10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f\": rpc error: code = NotFound desc = could not find container \"10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f\": container with ID starting with 10689617bdd036bc3b6238e025815b38056e4537acbd6ba775089deaed79f94f not found: ID does not exist" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.742898 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-nplwz" podStartSLOduration=10.969165746 podStartE2EDuration="11.742874091s" podCreationTimestamp="2025-12-03 00:21:19 +0000 UTC" firstStartedPulling="2025-12-03 00:21:29.155866476 +0000 UTC m=+913.004829082" lastFinishedPulling="2025-12-03 00:21:29.929574821 +0000 UTC m=+913.778537427" observedRunningTime="2025-12-03 00:21:30.72853706 +0000 UTC m=+914.577499666" watchObservedRunningTime="2025-12-03 00:21:30.742874091 +0000 UTC m=+914.591836697" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 
00:21:30.760225 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2w8gg" podStartSLOduration=13.93702782 podStartE2EDuration="14.760177156s" podCreationTimestamp="2025-12-03 00:21:16 +0000 UTC" firstStartedPulling="2025-12-03 00:21:29.110271251 +0000 UTC m=+912.959233857" lastFinishedPulling="2025-12-03 00:21:29.933420577 +0000 UTC m=+913.782383193" observedRunningTime="2025-12-03 00:21:30.757375025 +0000 UTC m=+914.606337641" watchObservedRunningTime="2025-12-03 00:21:30.760177156 +0000 UTC m=+914.609139762" Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.850805 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:30 crc kubenswrapper[4805]: I1203 00:21:30.859013 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 03 00:21:32 crc kubenswrapper[4805]: I1203 00:21:32.218683 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 03 00:21:32 crc kubenswrapper[4805]: I1203 00:21:32.431743 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e3ec7b-4e1c-433d-8213-8a206a48bc99" path="/var/lib/kubelet/pods/d3e3ec7b-4e1c-433d-8213-8a206a48bc99/volumes" Dec 03 00:21:38 crc kubenswrapper[4805]: I1203 00:21:38.727128 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerID="140611b8d0188f4e88ec4fd1707e6345efead5b7fe08fbf2533d2720c2f612f5" exitCode=0 Dec 03 00:21:38 crc kubenswrapper[4805]: I1203 00:21:38.727227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d","Type":"ContainerDied","Data":"140611b8d0188f4e88ec4fd1707e6345efead5b7fe08fbf2533d2720c2f612f5"} Dec 03 00:21:40 crc 
kubenswrapper[4805]: I1203 00:21:40.746060 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerID="3275c0edca8a41efe29643f3fc0926088627697d34191378829de124488a126e" exitCode=0 Dec 03 00:21:40 crc kubenswrapper[4805]: I1203 00:21:40.746141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d","Type":"ContainerDied","Data":"3275c0edca8a41efe29643f3fc0926088627697d34191378829de124488a126e"} Dec 03 00:21:40 crc kubenswrapper[4805]: I1203 00:21:40.785755 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d/manage-dockerfile/0.log" Dec 03 00:21:41 crc kubenswrapper[4805]: I1203 00:21:41.760149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d","Type":"ContainerStarted","Data":"1fc1fdbacb76d4013e908464095ef7d5ff0efa05ef4891b93fa65d99aaec55be"} Dec 03 00:21:41 crc kubenswrapper[4805]: I1203 00:21:41.793372 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=22.79335 podStartE2EDuration="22.79335s" podCreationTimestamp="2025-12-03 00:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:21:41.783564934 +0000 UTC m=+925.632527540" watchObservedRunningTime="2025-12-03 00:21:41.79335 +0000 UTC m=+925.642312606" Dec 03 00:21:46 crc kubenswrapper[4805]: I1203 00:21:46.805815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" 
event={"ID":"e4899534-51b2-4ed8-81e4-ea8a7c0f55ac","Type":"ContainerStarted","Data":"942ad0da9c4e263bcd1b3245f0f1940f2512e3030d05dd6d73461b7c3c3f348b"} Dec 03 00:21:46 crc kubenswrapper[4805]: I1203 00:21:46.806634 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:21:46 crc kubenswrapper[4805]: I1203 00:21:46.829312 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" podStartSLOduration=-9223372000.025488 podStartE2EDuration="36.829287003s" podCreationTimestamp="2025-12-03 00:21:10 +0000 UTC" firstStartedPulling="2025-12-03 00:21:11.473403413 +0000 UTC m=+895.322366019" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:21:46.822943753 +0000 UTC m=+930.671906379" watchObservedRunningTime="2025-12-03 00:21:46.829287003 +0000 UTC m=+930.678249629" Dec 03 00:21:51 crc kubenswrapper[4805]: I1203 00:21:51.060651 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-bw2pp" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.053069 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hbct9"] Dec 03 00:22:41 crc kubenswrapper[4805]: E1203 00:22:41.056112 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e3ec7b-4e1c-433d-8213-8a206a48bc99" containerName="manage-dockerfile" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.056578 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e3ec7b-4e1c-433d-8213-8a206a48bc99" containerName="manage-dockerfile" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.056876 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e3ec7b-4e1c-433d-8213-8a206a48bc99" containerName="manage-dockerfile" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.058099 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.070815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbct9"] Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.135044 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdw5c\" (UniqueName: \"kubernetes.io/projected/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-kube-api-access-fdw5c\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.135217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-utilities\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.135247 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-catalog-content\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.237132 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-utilities\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.237183 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-catalog-content\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.237230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdw5c\" (UniqueName: \"kubernetes.io/projected/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-kube-api-access-fdw5c\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.237876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-catalog-content\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.237968 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-utilities\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.259418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdw5c\" (UniqueName: \"kubernetes.io/projected/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-kube-api-access-fdw5c\") pod \"community-operators-hbct9\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.378461 4805 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:41 crc kubenswrapper[4805]: I1203 00:22:41.751398 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbct9"] Dec 03 00:22:42 crc kubenswrapper[4805]: I1203 00:22:42.230838 4805 generic.go:334] "Generic (PLEG): container finished" podID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerID="26251211431a6723f8842cfe951c55bd29833c716af1cf1e2aeba5ebf44c965b" exitCode=0 Dec 03 00:22:42 crc kubenswrapper[4805]: I1203 00:22:42.231100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbct9" event={"ID":"454d6749-e4b6-4e23-9c10-a8390f2bc2e4","Type":"ContainerDied","Data":"26251211431a6723f8842cfe951c55bd29833c716af1cf1e2aeba5ebf44c965b"} Dec 03 00:22:42 crc kubenswrapper[4805]: I1203 00:22:42.231343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbct9" event={"ID":"454d6749-e4b6-4e23-9c10-a8390f2bc2e4","Type":"ContainerStarted","Data":"37bf9a277f525249c5b56ff9c1d4a5006d592c24cd805bf6e0b33954b1dfd80a"} Dec 03 00:22:44 crc kubenswrapper[4805]: I1203 00:22:44.248312 4805 generic.go:334] "Generic (PLEG): container finished" podID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerID="e14113791f6e6c8bfc62f10124c7440d3f6b8c3ed99acab57a7082a1df9b5089" exitCode=0 Dec 03 00:22:44 crc kubenswrapper[4805]: I1203 00:22:44.248377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbct9" event={"ID":"454d6749-e4b6-4e23-9c10-a8390f2bc2e4","Type":"ContainerDied","Data":"e14113791f6e6c8bfc62f10124c7440d3f6b8c3ed99acab57a7082a1df9b5089"} Dec 03 00:22:45 crc kubenswrapper[4805]: I1203 00:22:45.256335 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbct9" 
event={"ID":"454d6749-e4b6-4e23-9c10-a8390f2bc2e4","Type":"ContainerStarted","Data":"adcb12f965b0b07d41871c70ecfd569b343c03f6e4e4afde99aaac66c3fdfb71"} Dec 03 00:22:47 crc kubenswrapper[4805]: I1203 00:22:47.811220 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:22:47 crc kubenswrapper[4805]: I1203 00:22:47.811597 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:22:51 crc kubenswrapper[4805]: I1203 00:22:51.379006 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:51 crc kubenswrapper[4805]: I1203 00:22:51.379583 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:51 crc kubenswrapper[4805]: I1203 00:22:51.453421 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:51 crc kubenswrapper[4805]: I1203 00:22:51.498534 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hbct9" podStartSLOduration=7.996804027 podStartE2EDuration="10.49850579s" podCreationTimestamp="2025-12-03 00:22:41 +0000 UTC" firstStartedPulling="2025-12-03 00:22:42.235365284 +0000 UTC m=+986.084327930" lastFinishedPulling="2025-12-03 00:22:44.737067047 +0000 UTC m=+988.586029693" observedRunningTime="2025-12-03 00:22:45.279865183 +0000 UTC 
m=+989.128827789" watchObservedRunningTime="2025-12-03 00:22:51.49850579 +0000 UTC m=+995.347468406" Dec 03 00:22:52 crc kubenswrapper[4805]: I1203 00:22:52.348060 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:52 crc kubenswrapper[4805]: I1203 00:22:52.389543 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbct9"] Dec 03 00:22:54 crc kubenswrapper[4805]: I1203 00:22:54.315584 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hbct9" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="registry-server" containerID="cri-o://adcb12f965b0b07d41871c70ecfd569b343c03f6e4e4afde99aaac66c3fdfb71" gracePeriod=2 Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.325261 4805 generic.go:334] "Generic (PLEG): container finished" podID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerID="adcb12f965b0b07d41871c70ecfd569b343c03f6e4e4afde99aaac66c3fdfb71" exitCode=0 Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.325334 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbct9" event={"ID":"454d6749-e4b6-4e23-9c10-a8390f2bc2e4","Type":"ContainerDied","Data":"adcb12f965b0b07d41871c70ecfd569b343c03f6e4e4afde99aaac66c3fdfb71"} Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.826958 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.899085 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-utilities\") pod \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.899210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdw5c\" (UniqueName: \"kubernetes.io/projected/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-kube-api-access-fdw5c\") pod \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.899248 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-catalog-content\") pod \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\" (UID: \"454d6749-e4b6-4e23-9c10-a8390f2bc2e4\") " Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.900267 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-utilities" (OuterVolumeSpecName: "utilities") pod "454d6749-e4b6-4e23-9c10-a8390f2bc2e4" (UID: "454d6749-e4b6-4e23-9c10-a8390f2bc2e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.907480 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-kube-api-access-fdw5c" (OuterVolumeSpecName: "kube-api-access-fdw5c") pod "454d6749-e4b6-4e23-9c10-a8390f2bc2e4" (UID: "454d6749-e4b6-4e23-9c10-a8390f2bc2e4"). InnerVolumeSpecName "kube-api-access-fdw5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:22:55 crc kubenswrapper[4805]: I1203 00:22:55.957588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "454d6749-e4b6-4e23-9c10-a8390f2bc2e4" (UID: "454d6749-e4b6-4e23-9c10-a8390f2bc2e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.000915 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdw5c\" (UniqueName: \"kubernetes.io/projected/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-kube-api-access-fdw5c\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.000978 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.001002 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454d6749-e4b6-4e23-9c10-a8390f2bc2e4-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.339239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbct9" event={"ID":"454d6749-e4b6-4e23-9c10-a8390f2bc2e4","Type":"ContainerDied","Data":"37bf9a277f525249c5b56ff9c1d4a5006d592c24cd805bf6e0b33954b1dfd80a"} Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.339322 4805 scope.go:117] "RemoveContainer" containerID="adcb12f965b0b07d41871c70ecfd569b343c03f6e4e4afde99aaac66c3fdfb71" Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.339568 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbct9" Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.418703 4805 scope.go:117] "RemoveContainer" containerID="e14113791f6e6c8bfc62f10124c7440d3f6b8c3ed99acab57a7082a1df9b5089" Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.439242 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbct9"] Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.439301 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hbct9"] Dec 03 00:22:56 crc kubenswrapper[4805]: I1203 00:22:56.451141 4805 scope.go:117] "RemoveContainer" containerID="26251211431a6723f8842cfe951c55bd29833c716af1cf1e2aeba5ebf44c965b" Dec 03 00:22:58 crc kubenswrapper[4805]: I1203 00:22:58.431249 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" path="/var/lib/kubelet/pods/454d6749-e4b6-4e23-9c10-a8390f2bc2e4/volumes" Dec 03 00:23:17 crc kubenswrapper[4805]: I1203 00:23:17.811754 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:23:17 crc kubenswrapper[4805]: I1203 00:23:17.812758 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:23:34 crc kubenswrapper[4805]: I1203 00:23:34.701140 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-hsx6d" podUID="b90bcdd7-2155-4f6f-bad9-19cea6e78c63" 
containerName="registry-server" probeResult="failure" output=< Dec 03 00:23:34 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 03 00:23:34 crc kubenswrapper[4805]: > Dec 03 00:23:34 crc kubenswrapper[4805]: I1203 00:23:34.715669 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-hsx6d" podUID="b90bcdd7-2155-4f6f-bad9-19cea6e78c63" containerName="registry-server" probeResult="failure" output=< Dec 03 00:23:34 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 03 00:23:34 crc kubenswrapper[4805]: > Dec 03 00:23:41 crc kubenswrapper[4805]: I1203 00:23:41.820145 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerID="1fc1fdbacb76d4013e908464095ef7d5ff0efa05ef4891b93fa65d99aaec55be" exitCode=0 Dec 03 00:23:41 crc kubenswrapper[4805]: I1203 00:23:41.820358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d","Type":"ContainerDied","Data":"1fc1fdbacb76d4013e908464095ef7d5ff0efa05ef4891b93fa65d99aaec55be"} Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.082800 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221125 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-blob-cache\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221177 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildcachedir\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-system-configs\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221298 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-proxy-ca-bundles\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221350 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-run\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221373 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-push\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-ca-bundles\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-pull\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221456 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildworkdir\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px4pm\" (UniqueName: \"kubernetes.io/projected/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-kube-api-access-px4pm\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221527 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-root\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221552 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-node-pullsecrets\") pod \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\" (UID: \"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d\") " Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221803 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.221834 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). 
InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.222092 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.222224 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.223598 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.223993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.229872 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.229940 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-kube-api-access-px4pm" (OuterVolumeSpecName: "kube-api-access-px4pm") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "kube-api-access-px4pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.230031 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.262903 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323627 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px4pm\" (UniqueName: \"kubernetes.io/projected/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-kube-api-access-px4pm\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323678 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323692 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323704 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323718 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323730 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323742 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323756 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.323768 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.394561 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.425280 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.834168 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d","Type":"ContainerDied","Data":"13cb98b2bd5ae7a93c833903fb9e393ec9b51c0fec71095f1c0d7f85aac94e9e"} Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.834241 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13cb98b2bd5ae7a93c833903fb9e393ec9b51c0fec71095f1c0d7f85aac94e9e" Dec 03 00:23:43 crc kubenswrapper[4805]: I1203 00:23:43.834306 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 03 00:23:45 crc kubenswrapper[4805]: I1203 00:23:45.635319 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" (UID: "3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:23:45 crc kubenswrapper[4805]: I1203 00:23:45.664001 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:23:47 crc kubenswrapper[4805]: I1203 00:23:47.810947 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:23:47 crc kubenswrapper[4805]: I1203 00:23:47.811049 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:23:47 crc kubenswrapper[4805]: I1203 00:23:47.811128 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:23:47 crc kubenswrapper[4805]: I1203 00:23:47.812168 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"338a7cc0895913e63346fa19d3b54987d60ec7ea8c91dbfbecd7727b32876440"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:23:47 crc kubenswrapper[4805]: I1203 00:23:47.812315 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://338a7cc0895913e63346fa19d3b54987d60ec7ea8c91dbfbecd7727b32876440" gracePeriod=600 Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.157262 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:23:48 crc kubenswrapper[4805]: E1203 00:23:48.158273 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="registry-server" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.158425 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="registry-server" Dec 03 00:23:48 crc kubenswrapper[4805]: E1203 00:23:48.158514 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerName="docker-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.158588 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerName="docker-build" Dec 03 00:23:48 crc kubenswrapper[4805]: E1203 00:23:48.158665 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerName="git-clone" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.158738 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" 
containerName="git-clone" Dec 03 00:23:48 crc kubenswrapper[4805]: E1203 00:23:48.158820 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerName="manage-dockerfile" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.158896 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerName="manage-dockerfile" Dec 03 00:23:48 crc kubenswrapper[4805]: E1203 00:23:48.158981 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="extract-content" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.159047 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="extract-content" Dec 03 00:23:48 crc kubenswrapper[4805]: E1203 00:23:48.159118 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="extract-utilities" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.159176 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="extract-utilities" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.159369 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="454d6749-e4b6-4e23-9c10-a8390f2bc2e4" containerName="registry-server" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.159437 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed48b3b-6ef2-4c7b-a245-7b6ec3fa147d" containerName="docker-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.160306 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.163264 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.163296 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.163334 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.163335 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.169389 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327464 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327518 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqbkp\" (UniqueName: \"kubernetes.io/projected/85bc1596-4e47-449d-87d2-73a239ae8070-kube-api-access-fqbkp\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327547 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327884 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" 
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.327978 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.328003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.328052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.328118 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.328154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429039 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqbkp\" (UniqueName: \"kubernetes.io/projected/85bc1596-4e47-449d-87d2-73a239ae8070-kube-api-access-fqbkp\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429090 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429170 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429178 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429191 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: 
I1203 00:23:48.429493 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429530 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 
00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429748 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.429934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.430058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.430178 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build" 
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.430375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.430464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.446854 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.446854 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.449494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqbkp\" (UniqueName: \"kubernetes.io/projected/85bc1596-4e47-449d-87d2-73a239ae8070-kube-api-access-fqbkp\") pod \"smart-gateway-operator-1-build\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") " pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.475879 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.720355 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:23:48 crc kubenswrapper[4805]: W1203 00:23:48.724503 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85bc1596_4e47_449d_87d2_73a239ae8070.slice/crio-9690152efde7f1acbd0b68973775c2d6d9b954bf34cdc94218d692af61a486c0 WatchSource:0}: Error finding container 9690152efde7f1acbd0b68973775c2d6d9b954bf34cdc94218d692af61a486c0: Status 404 returned error can't find the container with id 9690152efde7f1acbd0b68973775c2d6d9b954bf34cdc94218d692af61a486c0
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.869396 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="338a7cc0895913e63346fa19d3b54987d60ec7ea8c91dbfbecd7727b32876440" exitCode=0
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.869482 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"338a7cc0895913e63346fa19d3b54987d60ec7ea8c91dbfbecd7727b32876440"}
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.869569 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"a6a15cf13acceb9a934b2889a2fbcde236ba182391e642017c50de23b8e1efc7"}
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.869602 4805 scope.go:117] "RemoveContainer" containerID="c1e997872ae5cc6752d7b081f3a651fb9d62664b89bdfb81c87803944fd10204"
Dec 03 00:23:48 crc kubenswrapper[4805]: I1203 00:23:48.870488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"85bc1596-4e47-449d-87d2-73a239ae8070","Type":"ContainerStarted","Data":"9690152efde7f1acbd0b68973775c2d6d9b954bf34cdc94218d692af61a486c0"}
Dec 03 00:23:49 crc kubenswrapper[4805]: I1203 00:23:49.881725 4805 generic.go:334] "Generic (PLEG): container finished" podID="85bc1596-4e47-449d-87d2-73a239ae8070" containerID="2bdf512d2f02db8349b59416ac393caad6872fe8445849f05c638230030cd86e" exitCode=0
Dec 03 00:23:49 crc kubenswrapper[4805]: I1203 00:23:49.881796 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"85bc1596-4e47-449d-87d2-73a239ae8070","Type":"ContainerDied","Data":"2bdf512d2f02db8349b59416ac393caad6872fe8445849f05c638230030cd86e"}
Dec 03 00:23:50 crc kubenswrapper[4805]: I1203 00:23:50.909578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"85bc1596-4e47-449d-87d2-73a239ae8070","Type":"ContainerStarted","Data":"04df8f646e58b1a5b480c09cebf689e700a22bf0ae58ca17a725f583ab2fd89e"}
Dec 03 00:23:50 crc kubenswrapper[4805]: I1203 00:23:50.956697 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=2.956659208 podStartE2EDuration="2.956659208s" podCreationTimestamp="2025-12-03 00:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:23:50.94552782 +0000 UTC m=+1054.794490436" watchObservedRunningTime="2025-12-03 00:23:50.956659208 +0000 UTC m=+1054.805621854"
Dec 03 00:23:58 crc kubenswrapper[4805]: I1203 00:23:58.773689 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:23:58 crc kubenswrapper[4805]: I1203 00:23:58.774730 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="85bc1596-4e47-449d-87d2-73a239ae8070" containerName="docker-build" containerID="cri-o://04df8f646e58b1a5b480c09cebf689e700a22bf0ae58ca17a725f583ab2fd89e" gracePeriod=30
Dec 03 00:23:59 crc kubenswrapper[4805]: I1203 00:23:59.993832 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_85bc1596-4e47-449d-87d2-73a239ae8070/docker-build/0.log"
Dec 03 00:23:59 crc kubenswrapper[4805]: I1203 00:23:59.994564 4805 generic.go:334] "Generic (PLEG): container finished" podID="85bc1596-4e47-449d-87d2-73a239ae8070" containerID="04df8f646e58b1a5b480c09cebf689e700a22bf0ae58ca17a725f583ab2fd89e" exitCode=1
Dec 03 00:23:59 crc kubenswrapper[4805]: I1203 00:23:59.994635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"85bc1596-4e47-449d-87d2-73a239ae8070","Type":"ContainerDied","Data":"04df8f646e58b1a5b480c09cebf689e700a22bf0ae58ca17a725f583ab2fd89e"}
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.297319 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_85bc1596-4e47-449d-87d2-73a239ae8070/docker-build/0.log"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.298321 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.400837 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Dec 03 00:24:00 crc kubenswrapper[4805]: E1203 00:24:00.401110 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85bc1596-4e47-449d-87d2-73a239ae8070" containerName="manage-dockerfile"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.401126 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="85bc1596-4e47-449d-87d2-73a239ae8070" containerName="manage-dockerfile"
Dec 03 00:24:00 crc kubenswrapper[4805]: E1203 00:24:00.401137 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85bc1596-4e47-449d-87d2-73a239ae8070" containerName="docker-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.401143 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="85bc1596-4e47-449d-87d2-73a239ae8070" containerName="docker-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.401279 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="85bc1596-4e47-449d-87d2-73a239ae8070" containerName="docker-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.402101 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.404596 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.406145 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.406226 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.439994 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqbkp\" (UniqueName: \"kubernetes.io/projected/85bc1596-4e47-449d-87d2-73a239ae8070-kube-api-access-fqbkp\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.440210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-run\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.440403 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-build-blob-cache\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.440636 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-proxy-ca-bundles\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.440871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-buildworkdir\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.441087 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-push\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.441134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-node-pullsecrets\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.441360 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-pull\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.441626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-root\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.441678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-system-configs\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.441734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-ca-bundles\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.442147 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-buildcachedir\") pod \"85bc1596-4e47-449d-87d2-73a239ae8070\" (UID: \"85bc1596-4e47-449d-87d2-73a239ae8070\") "
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.446760 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.447460 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.453448 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.454447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.454539 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.454658 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.454960 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.456802 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.458400 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.473480 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85bc1596-4e47-449d-87d2-73a239ae8070-kube-api-access-fqbkp" (OuterVolumeSpecName: "kube-api-access-fqbkp") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "kube-api-access-fqbkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.479261 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.491680 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.491828 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.555842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.556020 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6tlp\" (UniqueName: \"kubernetes.io/projected/ac869363-c9e0-45a2-b19c-8c01d1598f66-kube-api-access-v6tlp\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.556075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.556096 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.556155 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.556272 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.556303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.556406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.557848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.557923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.557963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.557997 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558180 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqbkp\" (UniqueName: \"kubernetes.io/projected/85bc1596-4e47-449d-87d2-73a239ae8070-kube-api-access-fqbkp\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558242 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558261 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558279 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558293 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/85bc1596-4e47-449d-87d2-73a239ae8070-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558310 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558324 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85bc1596-4e47-449d-87d2-73a239ae8070-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.558338 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/85bc1596-4e47-449d-87d2-73a239ae8070-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.633581 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659770 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659853 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659877 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659909 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6tlp\" (UniqueName: \"kubernetes.io/projected/ac869363-c9e0-45a2-b19c-8c01d1598f66-kube-api-access-v6tlp\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.659987 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.660010 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.660032 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.660079 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.660140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.660920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.661358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.661956 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.662122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.662418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.662507 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.662761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.662844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.664735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.664805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.676442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6tlp\" (UniqueName: \"kubernetes.io/projected/ac869363-c9e0-45a2-b19c-8c01d1598f66-kube-api-access-v6tlp\") pod \"smart-gateway-operator-2-build\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.715615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.891758 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "85bc1596-4e47-449d-87d2-73a239ae8070" (UID: "85bc1596-4e47-449d-87d2-73a239ae8070"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.940616 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Dec 03 00:24:00 crc kubenswrapper[4805]: I1203 00:24:00.965735 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/85bc1596-4e47-449d-87d2-73a239ae8070-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.004318 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_85bc1596-4e47-449d-87d2-73a239ae8070/docker-build/0.log"
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.004877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"85bc1596-4e47-449d-87d2-73a239ae8070","Type":"ContainerDied","Data":"9690152efde7f1acbd0b68973775c2d6d9b954bf34cdc94218d692af61a486c0"}
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.004955 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.006095 4805 scope.go:117] "RemoveContainer" containerID="04df8f646e58b1a5b480c09cebf689e700a22bf0ae58ca17a725f583ab2fd89e"
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.006432 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"ac869363-c9e0-45a2-b19c-8c01d1598f66","Type":"ContainerStarted","Data":"9d7ff6d3c1e2a8f92c9101941556404f4f1ed6ef0faa4b22db74c4b1d07d87f6"}
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.052328 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.059095 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Dec 03 00:24:01 crc kubenswrapper[4805]: I1203 00:24:01.085304 4805 scope.go:117] "RemoveContainer" containerID="2bdf512d2f02db8349b59416ac393caad6872fe8445849f05c638230030cd86e"
Dec 03 00:24:02 crc kubenswrapper[4805]: I1203 00:24:02.015997 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"ac869363-c9e0-45a2-b19c-8c01d1598f66","Type":"ContainerStarted","Data":"5e695762c7931048cf4a1aed2d85980df421d25fadee6805a2aabca2176aa282"}
Dec 03 00:24:02 crc kubenswrapper[4805]: I1203 00:24:02.432868 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85bc1596-4e47-449d-87d2-73a239ae8070" path="/var/lib/kubelet/pods/85bc1596-4e47-449d-87d2-73a239ae8070/volumes"
Dec 03 00:24:03 crc kubenswrapper[4805]: I1203 00:24:03.028587 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac869363-c9e0-45a2-b19c-8c01d1598f66"
containerID="5e695762c7931048cf4a1aed2d85980df421d25fadee6805a2aabca2176aa282" exitCode=0 Dec 03 00:24:03 crc kubenswrapper[4805]: I1203 00:24:03.028654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"ac869363-c9e0-45a2-b19c-8c01d1598f66","Type":"ContainerDied","Data":"5e695762c7931048cf4a1aed2d85980df421d25fadee6805a2aabca2176aa282"} Dec 03 00:24:04 crc kubenswrapper[4805]: I1203 00:24:04.042427 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerID="31596ab076eedfe60e5a17d459b2df913f4c3a80ced6ac71197cccef6624ef17" exitCode=0 Dec 03 00:24:04 crc kubenswrapper[4805]: I1203 00:24:04.042521 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"ac869363-c9e0-45a2-b19c-8c01d1598f66","Type":"ContainerDied","Data":"31596ab076eedfe60e5a17d459b2df913f4c3a80ced6ac71197cccef6624ef17"} Dec 03 00:24:04 crc kubenswrapper[4805]: I1203 00:24:04.083583 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_ac869363-c9e0-45a2-b19c-8c01d1598f66/manage-dockerfile/0.log" Dec 03 00:24:05 crc kubenswrapper[4805]: I1203 00:24:05.055606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"ac869363-c9e0-45a2-b19c-8c01d1598f66","Type":"ContainerStarted","Data":"63686b602c6eff28cbd19255cf6afe4e543b4aa2c9c3d1f86ebe4896098099d9"} Dec 03 00:24:05 crc kubenswrapper[4805]: I1203 00:24:05.100349 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.100316363 podStartE2EDuration="5.100316363s" podCreationTimestamp="2025-12-03 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 
00:24:05.091036899 +0000 UTC m=+1068.939999565" watchObservedRunningTime="2025-12-03 00:24:05.100316363 +0000 UTC m=+1068.949278979" Dec 03 00:25:32 crc kubenswrapper[4805]: I1203 00:25:32.778061 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerID="63686b602c6eff28cbd19255cf6afe4e543b4aa2c9c3d1f86ebe4896098099d9" exitCode=0 Dec 03 00:25:32 crc kubenswrapper[4805]: I1203 00:25:32.778150 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"ac869363-c9e0-45a2-b19c-8c01d1598f66","Type":"ContainerDied","Data":"63686b602c6eff28cbd19255cf6afe4e543b4aa2c9c3d1f86ebe4896098099d9"} Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.196572 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297613 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-root\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297684 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6tlp\" (UniqueName: \"kubernetes.io/projected/ac869363-c9e0-45a2-b19c-8c01d1598f66-kube-api-access-v6tlp\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297729 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildworkdir\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " 
Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-node-pullsecrets\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297801 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-push\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297821 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-pull\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297852 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-blob-cache\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297886 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-proxy-ca-bundles\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297920 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-ca-bundles\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297938 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildcachedir\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.297963 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-system-configs\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.298007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-run\") pod \"ac869363-c9e0-45a2-b19c-8c01d1598f66\" (UID: \"ac869363-c9e0-45a2-b19c-8c01d1598f66\") " Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.298586 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.299440 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.300168 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.300788 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.300795 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.301237 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.305447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.309032 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.309661 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac869363-c9e0-45a2-b19c-8c01d1598f66-kube-api-access-v6tlp" (OuterVolumeSpecName: "kube-api-access-v6tlp") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "kube-api-access-v6tlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.324705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399489 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399569 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399585 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399597 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399609 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399621 4805 reconciler_common.go:293] "Volume detached 
for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399705 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6tlp\" (UniqueName: \"kubernetes.io/projected/ac869363-c9e0-45a2-b19c-8c01d1598f66-kube-api-access-v6tlp\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399720 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399750 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ac869363-c9e0-45a2-b19c-8c01d1598f66-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.399774 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/ac869363-c9e0-45a2-b19c-8c01d1598f66-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.529655 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.602426 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.798744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"ac869363-c9e0-45a2-b19c-8c01d1598f66","Type":"ContainerDied","Data":"9d7ff6d3c1e2a8f92c9101941556404f4f1ed6ef0faa4b22db74c4b1d07d87f6"} Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.798788 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d7ff6d3c1e2a8f92c9101941556404f4f1ed6ef0faa4b22db74c4b1d07d87f6" Dec 03 00:25:34 crc kubenswrapper[4805]: I1203 00:25:34.798864 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 03 00:25:36 crc kubenswrapper[4805]: I1203 00:25:36.773784 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ac869363-c9e0-45a2-b19c-8c01d1598f66" (UID: "ac869363-c9e0-45a2-b19c-8c01d1598f66"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:36 crc kubenswrapper[4805]: I1203 00:25:36.850834 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ac869363-c9e0-45a2-b19c-8c01d1598f66-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.221147 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:39 crc kubenswrapper[4805]: E1203 00:25:39.221510 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerName="docker-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.221528 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerName="docker-build" Dec 03 00:25:39 crc kubenswrapper[4805]: E1203 00:25:39.221544 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerName="manage-dockerfile" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.221552 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerName="manage-dockerfile" Dec 03 00:25:39 crc kubenswrapper[4805]: E1203 00:25:39.221582 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerName="git-clone" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.221589 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerName="git-clone" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.221735 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac869363-c9e0-45a2-b19c-8c01d1598f66" containerName="docker-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.222501 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.224790 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.224817 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.230214 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.233149 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.236259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285076 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-pull\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285106 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285220 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsjgb\" (UniqueName: \"kubernetes.io/projected/dbf7df6a-1606-4664-a9f2-1562309d3c2a-kube-api-access-tsjgb\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285358 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-push\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285412 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285490 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285557 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285572 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.285595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387161 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" 
(UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-pull\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387300 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsjgb\" (UniqueName: \"kubernetes.io/projected/dbf7df6a-1606-4664-a9f2-1562309d3c2a-kube-api-access-tsjgb\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387383 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-push\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387412 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387474 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387511 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387532 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-run\") pod 
\"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387638 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387801 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.387843 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.388247 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.388295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.388549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.388944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.655034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.655414 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: 
\"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-pull\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.655520 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-push\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.657822 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsjgb\" (UniqueName: \"kubernetes.io/projected/dbf7df6a-1606-4664-a9f2-1562309d3c2a-kube-api-access-tsjgb\") pod \"sg-core-1-build\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " pod="service-telemetry/sg-core-1-build" Dec 03 00:25:39 crc kubenswrapper[4805]: I1203 00:25:39.849024 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:25:40 crc kubenswrapper[4805]: I1203 00:25:40.102245 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:40 crc kubenswrapper[4805]: I1203 00:25:40.837323 4805 generic.go:334] "Generic (PLEG): container finished" podID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerID="c5a5deddf6c3135e8786e7763a3d18aa0176ea3cfc1e84020852b358b05c2f61" exitCode=0 Dec 03 00:25:40 crc kubenswrapper[4805]: I1203 00:25:40.837373 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"dbf7df6a-1606-4664-a9f2-1562309d3c2a","Type":"ContainerDied","Data":"c5a5deddf6c3135e8786e7763a3d18aa0176ea3cfc1e84020852b358b05c2f61"} Dec 03 00:25:40 crc kubenswrapper[4805]: I1203 00:25:40.837626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"dbf7df6a-1606-4664-a9f2-1562309d3c2a","Type":"ContainerStarted","Data":"2fdbd0e9b1cecddc35cf7de424efc6a2184ceefbab16c8323242e470f4a020b2"} Dec 03 00:25:41 crc kubenswrapper[4805]: I1203 00:25:41.845658 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"dbf7df6a-1606-4664-a9f2-1562309d3c2a","Type":"ContainerStarted","Data":"7ff3efaf039bf5db90b260663af5a9eff98137c545a808fec5d9107420035710"} Dec 03 00:25:41 crc kubenswrapper[4805]: I1203 00:25:41.871799 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=2.871780066 podStartE2EDuration="2.871780066s" podCreationTimestamp="2025-12-03 00:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:25:41.870820717 +0000 UTC m=+1165.719783363" watchObservedRunningTime="2025-12-03 00:25:41.871780066 +0000 UTC m=+1165.720742682" Dec 03 00:25:49 crc 
kubenswrapper[4805]: I1203 00:25:49.499310 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.500727 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerName="docker-build" containerID="cri-o://7ff3efaf039bf5db90b260663af5a9eff98137c545a808fec5d9107420035710" gracePeriod=30 Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.901320 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_dbf7df6a-1606-4664-a9f2-1562309d3c2a/docker-build/0.log" Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.901778 4805 generic.go:334] "Generic (PLEG): container finished" podID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerID="7ff3efaf039bf5db90b260663af5a9eff98137c545a808fec5d9107420035710" exitCode=1 Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.901815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"dbf7df6a-1606-4664-a9f2-1562309d3c2a","Type":"ContainerDied","Data":"7ff3efaf039bf5db90b260663af5a9eff98137c545a808fec5d9107420035710"} Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.901840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"dbf7df6a-1606-4664-a9f2-1562309d3c2a","Type":"ContainerDied","Data":"2fdbd0e9b1cecddc35cf7de424efc6a2184ceefbab16c8323242e470f4a020b2"} Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.901849 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fdbd0e9b1cecddc35cf7de424efc6a2184ceefbab16c8323242e470f4a020b2" Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.902277 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_sg-core-1-build_dbf7df6a-1606-4664-a9f2-1562309d3c2a/docker-build/0.log" Dec 03 00:25:49 crc kubenswrapper[4805]: I1203 00:25:49.902774 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.045787 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-node-pullsecrets\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.045876 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-system-configs\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.045915 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsjgb\" (UniqueName: \"kubernetes.io/projected/dbf7df6a-1606-4664-a9f2-1562309d3c2a-kube-api-access-tsjgb\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.045943 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-run\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.045938 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-node-pullsecrets" (OuterVolumeSpecName: 
"node-pullsecrets") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.045986 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-push\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046010 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-pull\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046031 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildcachedir\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046088 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-ca-bundles\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046119 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-blob-cache\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: 
\"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046136 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-root\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046163 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildworkdir\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046233 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-proxy-ca-bundles\") pod \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\" (UID: \"dbf7df6a-1606-4664-a9f2-1562309d3c2a\") " Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046387 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046785 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.046813 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbf7df6a-1606-4664-a9f2-1562309d3c2a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.047169 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.047389 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.048952 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.049055 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.049399 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.060503 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.060529 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf7df6a-1606-4664-a9f2-1562309d3c2a-kube-api-access-tsjgb" (OuterVolumeSpecName: "kube-api-access-tsjgb") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "kube-api-access-tsjgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.063771 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.141262 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149097 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149131 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149143 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149151 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149161 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dbf7df6a-1606-4664-a9f2-1562309d3c2a-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149170 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsjgb\" (UniqueName: \"kubernetes.io/projected/dbf7df6a-1606-4664-a9f2-1562309d3c2a-kube-api-access-tsjgb\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149178 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149186 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.149209 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/dbf7df6a-1606-4664-a9f2-1562309d3c2a-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.196237 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dbf7df6a-1606-4664-a9f2-1562309d3c2a" (UID: "dbf7df6a-1606-4664-a9f2-1562309d3c2a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.250741 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dbf7df6a-1606-4664-a9f2-1562309d3c2a-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.907564 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.939328 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:50 crc kubenswrapper[4805]: I1203 00:25:50.949137 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.163583 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 03 00:25:51 crc kubenswrapper[4805]: E1203 00:25:51.164233 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerName="manage-dockerfile" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.164319 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerName="manage-dockerfile" Dec 03 00:25:51 crc kubenswrapper[4805]: E1203 00:25:51.164432 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerName="docker-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.164512 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerName="docker-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.164731 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" containerName="docker-build" Dec 03 
00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.165866 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.171379 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.171543 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.171713 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.172729 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.198602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-system-configs\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-push\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264545 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-buildworkdir\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8fp\" (UniqueName: \"kubernetes.io/projected/da215de3-b074-413a-8a40-b90ecbec7306-kube-api-access-hn8fp\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264621 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-buildcachedir\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264639 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264655 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-pull\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264676 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-root\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264706 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-run\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264728 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.264750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-run\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366251 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-system-configs\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366271 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-push\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-buildworkdir\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366320 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8fp\" (UniqueName: \"kubernetes.io/projected/da215de3-b074-413a-8a40-b90ecbec7306-kube-api-access-hn8fp\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-buildcachedir\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366388 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366407 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-pull\") pod 
\"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-root\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.366970 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.367380 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.367375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-buildcachedir\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.367423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-buildworkdir\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 
00:25:51.367553 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-root\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.367849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-system-configs\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.368125 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-run\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.368296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.368696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.381059 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" 
(UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-pull\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.382488 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-push\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.389972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8fp\" (UniqueName: \"kubernetes.io/projected/da215de3-b074-413a-8a40-b90ecbec7306-kube-api-access-hn8fp\") pod \"sg-core-2-build\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.480499 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.723709 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 03 00:25:51 crc kubenswrapper[4805]: I1203 00:25:51.918893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"da215de3-b074-413a-8a40-b90ecbec7306","Type":"ContainerStarted","Data":"87c00cb688f300a5fcfa368677541b765235c49dba895b0d5bb8cf8d000587c3"} Dec 03 00:25:52 crc kubenswrapper[4805]: I1203 00:25:52.433027 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf7df6a-1606-4664-a9f2-1562309d3c2a" path="/var/lib/kubelet/pods/dbf7df6a-1606-4664-a9f2-1562309d3c2a/volumes" Dec 03 00:25:52 crc kubenswrapper[4805]: I1203 00:25:52.927601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"da215de3-b074-413a-8a40-b90ecbec7306","Type":"ContainerStarted","Data":"934d09d8a7b30893fcc1fd3fae9bc67d19b59fcc851ff4b42dbfce11f98ac2d3"} Dec 03 00:25:53 crc kubenswrapper[4805]: I1203 00:25:53.945544 4805 generic.go:334] "Generic (PLEG): container finished" podID="da215de3-b074-413a-8a40-b90ecbec7306" containerID="934d09d8a7b30893fcc1fd3fae9bc67d19b59fcc851ff4b42dbfce11f98ac2d3" exitCode=0 Dec 03 00:25:53 crc kubenswrapper[4805]: I1203 00:25:53.946085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"da215de3-b074-413a-8a40-b90ecbec7306","Type":"ContainerDied","Data":"934d09d8a7b30893fcc1fd3fae9bc67d19b59fcc851ff4b42dbfce11f98ac2d3"} Dec 03 00:25:54 crc kubenswrapper[4805]: I1203 00:25:54.956687 4805 generic.go:334] "Generic (PLEG): container finished" podID="da215de3-b074-413a-8a40-b90ecbec7306" containerID="ff901b3af597dc868f6548f851076cb3685e46b45189d57196ccfbdc77e944e9" exitCode=0 Dec 03 00:25:54 crc kubenswrapper[4805]: I1203 00:25:54.957279 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"da215de3-b074-413a-8a40-b90ecbec7306","Type":"ContainerDied","Data":"ff901b3af597dc868f6548f851076cb3685e46b45189d57196ccfbdc77e944e9"} Dec 03 00:25:55 crc kubenswrapper[4805]: I1203 00:25:55.001219 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_da215de3-b074-413a-8a40-b90ecbec7306/manage-dockerfile/0.log" Dec 03 00:25:55 crc kubenswrapper[4805]: I1203 00:25:55.967516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"da215de3-b074-413a-8a40-b90ecbec7306","Type":"ContainerStarted","Data":"99fb676caedf62ff23990be4f874db79c299e65ed9de91a88e8ea55bdde4d0ff"} Dec 03 00:25:56 crc kubenswrapper[4805]: I1203 00:25:56.006559 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.006519156 podStartE2EDuration="5.006519156s" podCreationTimestamp="2025-12-03 00:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:25:55.995091649 +0000 UTC m=+1179.844054335" watchObservedRunningTime="2025-12-03 00:25:56.006519156 +0000 UTC m=+1179.855481812" Dec 03 00:26:17 crc kubenswrapper[4805]: I1203 00:26:17.811410 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:26:17 crc kubenswrapper[4805]: I1203 00:26:17.812079 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:26:47 crc kubenswrapper[4805]: I1203 00:26:47.811685 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:26:47 crc kubenswrapper[4805]: I1203 00:26:47.812146 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:27:17 crc kubenswrapper[4805]: I1203 00:27:17.811535 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:27:17 crc kubenswrapper[4805]: I1203 00:27:17.812328 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:27:17 crc kubenswrapper[4805]: I1203 00:27:17.812419 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:27:17 crc kubenswrapper[4805]: I1203 00:27:17.813506 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a6a15cf13acceb9a934b2889a2fbcde236ba182391e642017c50de23b8e1efc7"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:27:17 crc kubenswrapper[4805]: I1203 00:27:17.813611 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://a6a15cf13acceb9a934b2889a2fbcde236ba182391e642017c50de23b8e1efc7" gracePeriod=600 Dec 03 00:27:18 crc kubenswrapper[4805]: I1203 00:27:18.547541 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="a6a15cf13acceb9a934b2889a2fbcde236ba182391e642017c50de23b8e1efc7" exitCode=0 Dec 03 00:27:18 crc kubenswrapper[4805]: I1203 00:27:18.547637 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"a6a15cf13acceb9a934b2889a2fbcde236ba182391e642017c50de23b8e1efc7"} Dec 03 00:27:18 crc kubenswrapper[4805]: I1203 00:27:18.548038 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"c5bd9b5c258ecf356c62660c4f09c420d8d82addbe1860d108970f42116ef5be"} Dec 03 00:27:18 crc kubenswrapper[4805]: I1203 00:27:18.548071 4805 scope.go:117] "RemoveContainer" containerID="338a7cc0895913e63346fa19d3b54987d60ec7ea8c91dbfbecd7727b32876440" Dec 03 00:29:23 crc kubenswrapper[4805]: I1203 00:29:23.589457 4805 generic.go:334] "Generic (PLEG): container finished" podID="da215de3-b074-413a-8a40-b90ecbec7306" 
containerID="99fb676caedf62ff23990be4f874db79c299e65ed9de91a88e8ea55bdde4d0ff" exitCode=0 Dec 03 00:29:23 crc kubenswrapper[4805]: I1203 00:29:23.589533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"da215de3-b074-413a-8a40-b90ecbec7306","Type":"ContainerDied","Data":"99fb676caedf62ff23990be4f874db79c299e65ed9de91a88e8ea55bdde4d0ff"} Dec 03 00:29:24 crc kubenswrapper[4805]: I1203 00:29:24.970773 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065042 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-root\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065118 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-pull\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065163 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-build-blob-cache\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-push\") pod 
\"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065312 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-buildcachedir\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065346 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-node-pullsecrets\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065366 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-run\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065389 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-buildworkdir\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-system-configs\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065415 4805 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065484 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8fp\" (UniqueName: \"kubernetes.io/projected/da215de3-b074-413a-8a40-b90ecbec7306-kube-api-access-hn8fp\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065520 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-proxy-ca-bundles\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-ca-bundles\") pod \"da215de3-b074-413a-8a40-b90ecbec7306\" (UID: \"da215de3-b074-413a-8a40-b90ecbec7306\") " Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.066008 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.065450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod 
"da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.066283 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.066535 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.067028 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.067225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.073377 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.073427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.074437 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da215de3-b074-413a-8a40-b90ecbec7306-kube-api-access-hn8fp" (OuterVolumeSpecName: "kube-api-access-hn8fp") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "kube-api-access-hn8fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.078267 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169632 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169686 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169709 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169727 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/da215de3-b074-413a-8a40-b90ecbec7306-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169745 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da215de3-b074-413a-8a40-b90ecbec7306-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169764 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169783 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-buildworkdir\") on node \"crc\" 
DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169801 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/da215de3-b074-413a-8a40-b90ecbec7306-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.169819 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn8fp\" (UniqueName: \"kubernetes.io/projected/da215de3-b074-413a-8a40-b90ecbec7306-kube-api-access-hn8fp\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.337356 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.373080 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.616526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"da215de3-b074-413a-8a40-b90ecbec7306","Type":"ContainerDied","Data":"87c00cb688f300a5fcfa368677541b765235c49dba895b0d5bb8cf8d000587c3"} Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.616569 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c00cb688f300a5fcfa368677541b765235c49dba895b0d5bb8cf8d000587c3" Dec 03 00:29:25 crc kubenswrapper[4805]: I1203 00:29:25.616641 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 03 00:29:27 crc kubenswrapper[4805]: I1203 00:29:27.968289 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "da215de3-b074-413a-8a40-b90ecbec7306" (UID: "da215de3-b074-413a-8a40-b90ecbec7306"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:28 crc kubenswrapper[4805]: I1203 00:29:28.006970 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/da215de3-b074-413a-8a40-b90ecbec7306-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.192692 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:30 crc kubenswrapper[4805]: E1203 00:29:30.192992 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da215de3-b074-413a-8a40-b90ecbec7306" containerName="docker-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.193011 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="da215de3-b074-413a-8a40-b90ecbec7306" containerName="docker-build" Dec 03 00:29:30 crc kubenswrapper[4805]: E1203 00:29:30.193033 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da215de3-b074-413a-8a40-b90ecbec7306" containerName="manage-dockerfile" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.193041 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="da215de3-b074-413a-8a40-b90ecbec7306" containerName="manage-dockerfile" Dec 03 00:29:30 crc kubenswrapper[4805]: E1203 00:29:30.193060 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da215de3-b074-413a-8a40-b90ecbec7306" containerName="git-clone" Dec 03 00:29:30 crc 
kubenswrapper[4805]: I1203 00:29:30.193067 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="da215de3-b074-413a-8a40-b90ecbec7306" containerName="git-clone" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.193226 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="da215de3-b074-413a-8a40-b90ecbec7306" containerName="docker-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.194072 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.200347 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.200407 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.200485 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.200495 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.209280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340466 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-pull\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340501 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnnn\" (UniqueName: \"kubernetes.io/projected/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-kube-api-access-zjnnn\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340527 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340569 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340600 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340622 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340681 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-push\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340731 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.340764 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441580 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-pull\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjnnn\" (UniqueName: \"kubernetes.io/projected/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-kube-api-access-zjnnn\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441660 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441684 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441738 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-push\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441755 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441772 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.441796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.442034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.442327 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.442507 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.442601 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.442646 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.443490 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.444639 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.444945 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-root\") pod \"sg-bridge-1-build\" (UID: 
\"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.451802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.452224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-push\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.459958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-pull\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.474812 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjnnn\" (UniqueName: \"kubernetes.io/projected/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-kube-api-access-zjnnn\") pod \"sg-bridge-1-build\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.511964 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:30 crc kubenswrapper[4805]: I1203 00:29:30.824235 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:31 crc kubenswrapper[4805]: I1203 00:29:31.670430 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" containerID="4b11d9737c68503daf09fbce25b25df23c893fd904cccbf4d8b5c5c081dd542d" exitCode=0 Dec 03 00:29:31 crc kubenswrapper[4805]: I1203 00:29:31.670498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e8a9bc97-b82a-4aa9-874d-43598b39f1d2","Type":"ContainerDied","Data":"4b11d9737c68503daf09fbce25b25df23c893fd904cccbf4d8b5c5c081dd542d"} Dec 03 00:29:31 crc kubenswrapper[4805]: I1203 00:29:31.670983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e8a9bc97-b82a-4aa9-874d-43598b39f1d2","Type":"ContainerStarted","Data":"f28c967e71135134aa7f1f49c390af8428c1a5e9e0687100070ab3bb0207c993"} Dec 03 00:29:32 crc kubenswrapper[4805]: I1203 00:29:32.679106 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e8a9bc97-b82a-4aa9-874d-43598b39f1d2","Type":"ContainerStarted","Data":"3e0458a6db2680d9cca363e08aee4496ca10266ff48cb36a45d07be986344e9b"} Dec 03 00:29:32 crc kubenswrapper[4805]: I1203 00:29:32.704605 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=2.704580318 podStartE2EDuration="2.704580318s" podCreationTimestamp="2025-12-03 00:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:29:32.702101047 +0000 UTC m=+1396.551063643" watchObservedRunningTime="2025-12-03 00:29:32.704580318 +0000 UTC m=+1396.553542924" Dec 03 00:29:39 
crc kubenswrapper[4805]: I1203 00:29:39.742432 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e8a9bc97-b82a-4aa9-874d-43598b39f1d2/docker-build/0.log" Dec 03 00:29:39 crc kubenswrapper[4805]: I1203 00:29:39.743705 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" containerID="3e0458a6db2680d9cca363e08aee4496ca10266ff48cb36a45d07be986344e9b" exitCode=1 Dec 03 00:29:39 crc kubenswrapper[4805]: I1203 00:29:39.743771 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e8a9bc97-b82a-4aa9-874d-43598b39f1d2","Type":"ContainerDied","Data":"3e0458a6db2680d9cca363e08aee4496ca10266ff48cb36a45d07be986344e9b"} Dec 03 00:29:40 crc kubenswrapper[4805]: I1203 00:29:40.176323 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.010321 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e8a9bc97-b82a-4aa9-874d-43598b39f1d2/docker-build/0.log" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.011241 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210610 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-pull\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210700 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-system-configs\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210724 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-push\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210773 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-proxy-ca-bundles\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210816 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-root\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210842 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-run\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210889 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjnnn\" (UniqueName: \"kubernetes.io/projected/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-kube-api-access-zjnnn\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210912 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-node-pullsecrets\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210929 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-ca-bundles\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210966 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-blob-cache\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.210983 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildcachedir\") 
pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.211046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildworkdir\") pod \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\" (UID: \"e8a9bc97-b82a-4aa9-874d-43598b39f1d2\") " Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.211074 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.211323 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.211560 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.211977 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.212029 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.212098 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.212140 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.212991 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.218436 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.218915 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-kube-api-access-zjnnn" (OuterVolumeSpecName: "kube-api-access-zjnnn") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "kube-api-access-zjnnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.218979 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.287345 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312847 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjnnn\" (UniqueName: \"kubernetes.io/projected/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-kube-api-access-zjnnn\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312883 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312894 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312904 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312918 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312930 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312941 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 
00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312951 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312961 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.312971 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.605586 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e8a9bc97-b82a-4aa9-874d-43598b39f1d2" (UID: "e8a9bc97-b82a-4aa9-874d-43598b39f1d2"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.617835 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e8a9bc97-b82a-4aa9-874d-43598b39f1d2-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.764041 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e8a9bc97-b82a-4aa9-874d-43598b39f1d2/docker-build/0.log" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.764488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e8a9bc97-b82a-4aa9-874d-43598b39f1d2","Type":"ContainerDied","Data":"f28c967e71135134aa7f1f49c390af8428c1a5e9e0687100070ab3bb0207c993"} Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.764529 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28c967e71135134aa7f1f49c390af8428c1a5e9e0687100070ab3bb0207c993" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.764552 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.812573 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.818531 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.855041 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 03 00:29:41 crc kubenswrapper[4805]: E1203 00:29:41.855328 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" containerName="manage-dockerfile" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.855345 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" containerName="manage-dockerfile" Dec 03 00:29:41 crc kubenswrapper[4805]: E1203 00:29:41.855356 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" containerName="docker-build" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.855364 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" containerName="docker-build" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.855510 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" containerName="docker-build" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.856413 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.861013 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.861143 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.864683 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.866328 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Dec 03 00:29:41 crc kubenswrapper[4805]: I1203 00:29:41.875523 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.023814 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-push\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.023872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.023914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.023935 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024001 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024033 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-pull\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5jvb6\" (UniqueName: \"kubernetes.io/projected/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-kube-api-access-5jvb6\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024152 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024174 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024260 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.024290 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.125827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.125898 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.125929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-pull\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.125990 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jvb6\" (UniqueName: \"kubernetes.io/projected/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-kube-api-access-5jvb6\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126238 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126388 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126889 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126914 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-run\") pod 
\"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-push\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.126982 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.127006 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.127026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.127907 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " 
pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.127983 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.127987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.128148 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.128296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.128693 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.130954 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-push\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.131055 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-pull\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.165941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jvb6\" (UniqueName: \"kubernetes.io/projected/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-kube-api-access-5jvb6\") pod \"sg-bridge-2-build\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.175068 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.432795 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a9bc97-b82a-4aa9-874d-43598b39f1d2" path="/var/lib/kubelet/pods/e8a9bc97-b82a-4aa9-874d-43598b39f1d2/volumes" Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.657395 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 03 00:29:42 crc kubenswrapper[4805]: I1203 00:29:42.776693 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac","Type":"ContainerStarted","Data":"a3f3b66b9aae4bc34e7b41ef1b72ae5d63b1baab614434e52e02b470854715a0"} Dec 03 00:29:43 crc kubenswrapper[4805]: I1203 00:29:43.786814 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac","Type":"ContainerStarted","Data":"6ec27def9096cde3a8efabdae3e93a2666cfccb9f80dce2cba9a6b692ba878c0"} Dec 03 00:29:44 crc kubenswrapper[4805]: I1203 00:29:44.796867 4805 generic.go:334] "Generic (PLEG): container finished" podID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerID="6ec27def9096cde3a8efabdae3e93a2666cfccb9f80dce2cba9a6b692ba878c0" exitCode=0 Dec 03 00:29:44 crc kubenswrapper[4805]: I1203 00:29:44.797012 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac","Type":"ContainerDied","Data":"6ec27def9096cde3a8efabdae3e93a2666cfccb9f80dce2cba9a6b692ba878c0"} Dec 03 00:29:45 crc kubenswrapper[4805]: I1203 00:29:45.805862 4805 generic.go:334] "Generic (PLEG): container finished" podID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerID="c89c0115d18d5cefa7f68c61a626c9a84847a3458bd27e11af891cb14b279d61" exitCode=0 Dec 03 00:29:45 crc kubenswrapper[4805]: I1203 00:29:45.806190 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac","Type":"ContainerDied","Data":"c89c0115d18d5cefa7f68c61a626c9a84847a3458bd27e11af891cb14b279d61"} Dec 03 00:29:45 crc kubenswrapper[4805]: I1203 00:29:45.868435 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ff1a9d5-5edc-4ecb-8f79-5ac953580aac/manage-dockerfile/0.log" Dec 03 00:29:46 crc kubenswrapper[4805]: I1203 00:29:46.819046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac","Type":"ContainerStarted","Data":"fff3914a3169b957cd540acffcfc9ce7bbfd5d65d8f6dcff887fd0df30a23898"} Dec 03 00:29:46 crc kubenswrapper[4805]: I1203 00:29:46.847152 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.847134294 podStartE2EDuration="5.847134294s" podCreationTimestamp="2025-12-03 00:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:29:46.84205269 +0000 UTC m=+1410.691015296" watchObservedRunningTime="2025-12-03 00:29:46.847134294 +0000 UTC m=+1410.696096900" Dec 03 00:29:47 crc kubenswrapper[4805]: I1203 00:29:47.811973 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:29:47 crc kubenswrapper[4805]: I1203 00:29:47.812079 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.018071 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-262vx"] Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.019851 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.039130 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-262vx"] Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.123649 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-utilities\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.123737 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zq6s\" (UniqueName: \"kubernetes.io/projected/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-kube-api-access-6zq6s\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.123772 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-catalog-content\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.224838 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-utilities\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.225256 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zq6s\" (UniqueName: \"kubernetes.io/projected/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-kube-api-access-6zq6s\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.225368 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-catalog-content\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.225415 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-utilities\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.225725 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-catalog-content\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.254447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zq6s\" (UniqueName: 
\"kubernetes.io/projected/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-kube-api-access-6zq6s\") pod \"redhat-operators-262vx\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.335455 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.647953 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-262vx"] Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.898448 4805 generic.go:334] "Generic (PLEG): container finished" podID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerID="010b1dce12d454a7bb2cb275aca376ee2862835d13d95e39c6a61acd1adb9299" exitCode=0 Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.898814 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-262vx" event={"ID":"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad","Type":"ContainerDied","Data":"010b1dce12d454a7bb2cb275aca376ee2862835d13d95e39c6a61acd1adb9299"} Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.898845 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-262vx" event={"ID":"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad","Type":"ContainerStarted","Data":"d23b46ab7e10d4158eeca99170b1f9aa3c9357de9e4654d9625599ae48d1c824"} Dec 03 00:29:54 crc kubenswrapper[4805]: I1203 00:29:54.900704 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:29:55 crc kubenswrapper[4805]: I1203 00:29:55.906937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-262vx" event={"ID":"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad","Type":"ContainerStarted","Data":"3571802f06d2c9f67c0fe6ebbd9cf4863d07b0e66feb62a50dbee786bcc76718"} Dec 03 00:29:56 crc 
kubenswrapper[4805]: I1203 00:29:56.919111 4805 generic.go:334] "Generic (PLEG): container finished" podID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerID="3571802f06d2c9f67c0fe6ebbd9cf4863d07b0e66feb62a50dbee786bcc76718" exitCode=0 Dec 03 00:29:56 crc kubenswrapper[4805]: I1203 00:29:56.919253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-262vx" event={"ID":"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad","Type":"ContainerDied","Data":"3571802f06d2c9f67c0fe6ebbd9cf4863d07b0e66feb62a50dbee786bcc76718"} Dec 03 00:29:58 crc kubenswrapper[4805]: I1203 00:29:58.958739 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-262vx" event={"ID":"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad","Type":"ContainerStarted","Data":"1774e56d2a7f92de01c0fdff24cc2016b1d8d1aa66f822c4a05c33fca872c2e1"} Dec 03 00:29:58 crc kubenswrapper[4805]: I1203 00:29:58.983356 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-262vx" podStartSLOduration=2.688452797 podStartE2EDuration="5.98333112s" podCreationTimestamp="2025-12-03 00:29:53 +0000 UTC" firstStartedPulling="2025-12-03 00:29:54.900466912 +0000 UTC m=+1418.749429518" lastFinishedPulling="2025-12-03 00:29:58.195345235 +0000 UTC m=+1422.044307841" observedRunningTime="2025-12-03 00:29:58.980401348 +0000 UTC m=+1422.829363964" watchObservedRunningTime="2025-12-03 00:29:58.98333112 +0000 UTC m=+1422.832293726" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.158366 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44"] Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.159421 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.162339 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.162589 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.176918 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44"] Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.218561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75285954-1659-4c12-a986-d26731505a53-config-volume\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.218631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75285954-1659-4c12-a986-d26731505a53-secret-volume\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.218713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gj8\" (UniqueName: \"kubernetes.io/projected/75285954-1659-4c12-a986-d26731505a53-kube-api-access-j6gj8\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.320605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75285954-1659-4c12-a986-d26731505a53-config-volume\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.320652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75285954-1659-4c12-a986-d26731505a53-secret-volume\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.320704 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gj8\" (UniqueName: \"kubernetes.io/projected/75285954-1659-4c12-a986-d26731505a53-kube-api-access-j6gj8\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.321577 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75285954-1659-4c12-a986-d26731505a53-config-volume\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.336664 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gj8\" (UniqueName: 
\"kubernetes.io/projected/75285954-1659-4c12-a986-d26731505a53-kube-api-access-j6gj8\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.336729 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75285954-1659-4c12-a986-d26731505a53-secret-volume\") pod \"collect-profiles-29412030-8qk44\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.479855 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.720899 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44"] Dec 03 00:30:00 crc kubenswrapper[4805]: W1203 00:30:00.738408 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75285954_1659_4c12_a986_d26731505a53.slice/crio-a0e4dfcfba078903fd768b7be436670dc65b2aea385215a48b392abba3d855bb WatchSource:0}: Error finding container a0e4dfcfba078903fd768b7be436670dc65b2aea385215a48b392abba3d855bb: Status 404 returned error can't find the container with id a0e4dfcfba078903fd768b7be436670dc65b2aea385215a48b392abba3d855bb Dec 03 00:30:00 crc kubenswrapper[4805]: I1203 00:30:00.972396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" event={"ID":"75285954-1659-4c12-a986-d26731505a53","Type":"ContainerStarted","Data":"a0e4dfcfba078903fd768b7be436670dc65b2aea385215a48b392abba3d855bb"} Dec 03 00:30:01 crc kubenswrapper[4805]: 
I1203 00:30:01.981225 4805 generic.go:334] "Generic (PLEG): container finished" podID="75285954-1659-4c12-a986-d26731505a53" containerID="8cf052dbe8579710d58e34850666524f09f640dea6d2a25539b3dcbb109ce74c" exitCode=0 Dec 03 00:30:01 crc kubenswrapper[4805]: I1203 00:30:01.981382 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" event={"ID":"75285954-1659-4c12-a986-d26731505a53","Type":"ContainerDied","Data":"8cf052dbe8579710d58e34850666524f09f640dea6d2a25539b3dcbb109ce74c"} Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.213807 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.265084 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75285954-1659-4c12-a986-d26731505a53-secret-volume\") pod \"75285954-1659-4c12-a986-d26731505a53\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.265210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75285954-1659-4c12-a986-d26731505a53-config-volume\") pod \"75285954-1659-4c12-a986-d26731505a53\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.265300 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6gj8\" (UniqueName: \"kubernetes.io/projected/75285954-1659-4c12-a986-d26731505a53-kube-api-access-j6gj8\") pod \"75285954-1659-4c12-a986-d26731505a53\" (UID: \"75285954-1659-4c12-a986-d26731505a53\") " Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.266098 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/75285954-1659-4c12-a986-d26731505a53-config-volume" (OuterVolumeSpecName: "config-volume") pod "75285954-1659-4c12-a986-d26731505a53" (UID: "75285954-1659-4c12-a986-d26731505a53"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.271368 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75285954-1659-4c12-a986-d26731505a53-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75285954-1659-4c12-a986-d26731505a53" (UID: "75285954-1659-4c12-a986-d26731505a53"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.271860 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75285954-1659-4c12-a986-d26731505a53-kube-api-access-j6gj8" (OuterVolumeSpecName: "kube-api-access-j6gj8") pod "75285954-1659-4c12-a986-d26731505a53" (UID: "75285954-1659-4c12-a986-d26731505a53"). InnerVolumeSpecName "kube-api-access-j6gj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.367301 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75285954-1659-4c12-a986-d26731505a53-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.367352 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75285954-1659-4c12-a986-d26731505a53-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.367364 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6gj8\" (UniqueName: \"kubernetes.io/projected/75285954-1659-4c12-a986-d26731505a53-kube-api-access-j6gj8\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.998324 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" event={"ID":"75285954-1659-4c12-a986-d26731505a53","Type":"ContainerDied","Data":"a0e4dfcfba078903fd768b7be436670dc65b2aea385215a48b392abba3d855bb"} Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.998394 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e4dfcfba078903fd768b7be436670dc65b2aea385215a48b392abba3d855bb" Dec 03 00:30:03 crc kubenswrapper[4805]: I1203 00:30:03.998432 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412030-8qk44" Dec 03 00:30:04 crc kubenswrapper[4805]: I1203 00:30:04.336581 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:30:04 crc kubenswrapper[4805]: I1203 00:30:04.336704 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:30:04 crc kubenswrapper[4805]: I1203 00:30:04.395838 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:30:05 crc kubenswrapper[4805]: I1203 00:30:05.050383 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:30:05 crc kubenswrapper[4805]: I1203 00:30:05.113732 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-262vx"] Dec 03 00:30:07 crc kubenswrapper[4805]: I1203 00:30:07.020970 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-262vx" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="registry-server" containerID="cri-o://1774e56d2a7f92de01c0fdff24cc2016b1d8d1aa66f822c4a05c33fca872c2e1" gracePeriod=2 Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.033263 4805 generic.go:334] "Generic (PLEG): container finished" podID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerID="1774e56d2a7f92de01c0fdff24cc2016b1d8d1aa66f822c4a05c33fca872c2e1" exitCode=0 Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.033347 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-262vx" event={"ID":"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad","Type":"ContainerDied","Data":"1774e56d2a7f92de01c0fdff24cc2016b1d8d1aa66f822c4a05c33fca872c2e1"} Dec 03 00:30:08 crc 
kubenswrapper[4805]: I1203 00:30:08.550708 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.567015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-catalog-content\") pod \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.567112 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zq6s\" (UniqueName: \"kubernetes.io/projected/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-kube-api-access-6zq6s\") pod \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.567320 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-utilities\") pod \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\" (UID: \"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad\") " Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.577457 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-kube-api-access-6zq6s" (OuterVolumeSpecName: "kube-api-access-6zq6s") pod "45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" (UID: "45b4c852-bd5c-4d8f-b59b-b7c24fced2ad"). InnerVolumeSpecName "kube-api-access-6zq6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.581112 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-utilities" (OuterVolumeSpecName: "utilities") pod "45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" (UID: "45b4c852-bd5c-4d8f-b59b-b7c24fced2ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.668705 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.668742 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zq6s\" (UniqueName: \"kubernetes.io/projected/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-kube-api-access-6zq6s\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.710714 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" (UID: "45b4c852-bd5c-4d8f-b59b-b7c24fced2ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:08 crc kubenswrapper[4805]: I1203 00:30:08.771745 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:09 crc kubenswrapper[4805]: I1203 00:30:09.043304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-262vx" event={"ID":"45b4c852-bd5c-4d8f-b59b-b7c24fced2ad","Type":"ContainerDied","Data":"d23b46ab7e10d4158eeca99170b1f9aa3c9357de9e4654d9625599ae48d1c824"} Dec 03 00:30:09 crc kubenswrapper[4805]: I1203 00:30:09.043385 4805 scope.go:117] "RemoveContainer" containerID="1774e56d2a7f92de01c0fdff24cc2016b1d8d1aa66f822c4a05c33fca872c2e1" Dec 03 00:30:09 crc kubenswrapper[4805]: I1203 00:30:09.043619 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-262vx" Dec 03 00:30:09 crc kubenswrapper[4805]: I1203 00:30:09.095463 4805 scope.go:117] "RemoveContainer" containerID="3571802f06d2c9f67c0fe6ebbd9cf4863d07b0e66feb62a50dbee786bcc76718" Dec 03 00:30:09 crc kubenswrapper[4805]: I1203 00:30:09.106363 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-262vx"] Dec 03 00:30:09 crc kubenswrapper[4805]: I1203 00:30:09.121838 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-262vx"] Dec 03 00:30:09 crc kubenswrapper[4805]: I1203 00:30:09.132480 4805 scope.go:117] "RemoveContainer" containerID="010b1dce12d454a7bb2cb275aca376ee2862835d13d95e39c6a61acd1adb9299" Dec 03 00:30:10 crc kubenswrapper[4805]: I1203 00:30:10.435691 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" path="/var/lib/kubelet/pods/45b4c852-bd5c-4d8f-b59b-b7c24fced2ad/volumes" Dec 03 00:30:17 crc 
kubenswrapper[4805]: I1203 00:30:17.811416 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:30:17 crc kubenswrapper[4805]: I1203 00:30:17.812241 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.602059 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-77h8j"] Dec 03 00:30:37 crc kubenswrapper[4805]: E1203 00:30:37.603269 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="extract-utilities" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.603295 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="extract-utilities" Dec 03 00:30:37 crc kubenswrapper[4805]: E1203 00:30:37.603318 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="registry-server" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.603330 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="registry-server" Dec 03 00:30:37 crc kubenswrapper[4805]: E1203 00:30:37.603350 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="extract-content" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.603361 4805 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="extract-content" Dec 03 00:30:37 crc kubenswrapper[4805]: E1203 00:30:37.603372 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75285954-1659-4c12-a986-d26731505a53" containerName="collect-profiles" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.603384 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="75285954-1659-4c12-a986-d26731505a53" containerName="collect-profiles" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.603548 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b4c852-bd5c-4d8f-b59b-b7c24fced2ad" containerName="registry-server" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.603569 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="75285954-1659-4c12-a986-d26731505a53" containerName="collect-profiles" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.605147 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.618010 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77h8j"] Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.718043 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdwx\" (UniqueName: \"kubernetes.io/projected/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-kube-api-access-qfdwx\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.718105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-utilities\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.718693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-catalog-content\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.820885 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfdwx\" (UniqueName: \"kubernetes.io/projected/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-kube-api-access-qfdwx\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.820975 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-utilities\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.821011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-catalog-content\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.821907 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-utilities\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.821917 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-catalog-content\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.857473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfdwx\" (UniqueName: \"kubernetes.io/projected/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-kube-api-access-qfdwx\") pod \"certified-operators-77h8j\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:37 crc kubenswrapper[4805]: I1203 00:30:37.933264 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:38 crc kubenswrapper[4805]: I1203 00:30:38.233255 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77h8j"] Dec 03 00:30:38 crc kubenswrapper[4805]: I1203 00:30:38.318590 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77h8j" event={"ID":"d9e51dfe-06fb-45da-93ad-2ba00ce6d973","Type":"ContainerStarted","Data":"914c4c551de1a7b7660f88bf1340faa1fa406156fb9d7861ae39453914830df9"} Dec 03 00:30:39 crc kubenswrapper[4805]: I1203 00:30:39.327860 4805 generic.go:334] "Generic (PLEG): container finished" podID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerID="b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff" exitCode=0 Dec 03 00:30:39 crc kubenswrapper[4805]: I1203 00:30:39.327950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77h8j" event={"ID":"d9e51dfe-06fb-45da-93ad-2ba00ce6d973","Type":"ContainerDied","Data":"b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff"} Dec 03 00:30:41 crc kubenswrapper[4805]: I1203 00:30:41.345028 4805 generic.go:334] "Generic (PLEG): container finished" podID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerID="10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d" exitCode=0 Dec 03 00:30:41 crc kubenswrapper[4805]: I1203 00:30:41.345112 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77h8j" event={"ID":"d9e51dfe-06fb-45da-93ad-2ba00ce6d973","Type":"ContainerDied","Data":"10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d"} Dec 03 00:30:42 crc kubenswrapper[4805]: I1203 00:30:42.354003 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77h8j" 
event={"ID":"d9e51dfe-06fb-45da-93ad-2ba00ce6d973","Type":"ContainerStarted","Data":"9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f"} Dec 03 00:30:42 crc kubenswrapper[4805]: I1203 00:30:42.373498 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-77h8j" podStartSLOduration=2.952296348 podStartE2EDuration="5.373465731s" podCreationTimestamp="2025-12-03 00:30:37 +0000 UTC" firstStartedPulling="2025-12-03 00:30:39.330272491 +0000 UTC m=+1463.179235107" lastFinishedPulling="2025-12-03 00:30:41.751441884 +0000 UTC m=+1465.600404490" observedRunningTime="2025-12-03 00:30:42.371041211 +0000 UTC m=+1466.220003827" watchObservedRunningTime="2025-12-03 00:30:42.373465731 +0000 UTC m=+1466.222428357" Dec 03 00:30:46 crc kubenswrapper[4805]: I1203 00:30:46.392882 4805 generic.go:334] "Generic (PLEG): container finished" podID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerID="fff3914a3169b957cd540acffcfc9ce7bbfd5d65d8f6dcff887fd0df30a23898" exitCode=0 Dec 03 00:30:46 crc kubenswrapper[4805]: I1203 00:30:46.393009 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac","Type":"ContainerDied","Data":"fff3914a3169b957cd540acffcfc9ce7bbfd5d65d8f6dcff887fd0df30a23898"} Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.634596 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.696978 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-ca-bundles\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697296 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jvb6\" (UniqueName: \"kubernetes.io/projected/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-kube-api-access-5jvb6\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-proxy-ca-bundles\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-push\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697365 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-pull\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697401 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-blob-cache\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildcachedir\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697442 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-run\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-node-pullsecrets\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697488 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-system-configs\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697517 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-root\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697541 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildworkdir\") pod \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\" (UID: \"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac\") " Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.697824 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.698121 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.698163 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.698226 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.698536 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.699185 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.699238 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.703314 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.703429 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.703449 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-kube-api-access-5jvb6" (OuterVolumeSpecName: "kube-api-access-5jvb6") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "kube-api-access-5jvb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798357 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798387 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jvb6\" (UniqueName: \"kubernetes.io/projected/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-kube-api-access-5jvb6\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798398 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798407 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798416 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798425 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798433 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798440 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.798449 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.803327 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.811585 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.811674 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.811733 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.812523 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5bd9b5c258ecf356c62660c4f09c420d8d82addbe1860d108970f42116ef5be"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.812629 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://c5bd9b5c258ecf356c62660c4f09c420d8d82addbe1860d108970f42116ef5be" gracePeriod=600 Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.899904 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.934255 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.934303 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:47 crc kubenswrapper[4805]: I1203 00:30:47.979568 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.366740 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" (UID: "5ff1a9d5-5edc-4ecb-8f79-5ac953580aac"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.404740 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ff1a9d5-5edc-4ecb-8f79-5ac953580aac-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.409659 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="c5bd9b5c258ecf356c62660c4f09c420d8d82addbe1860d108970f42116ef5be" exitCode=0 Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.409730 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"c5bd9b5c258ecf356c62660c4f09c420d8d82addbe1860d108970f42116ef5be"} Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.409773 4805 scope.go:117] "RemoveContainer" containerID="a6a15cf13acceb9a934b2889a2fbcde236ba182391e642017c50de23b8e1efc7" Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.413228 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.413112 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ff1a9d5-5edc-4ecb-8f79-5ac953580aac","Type":"ContainerDied","Data":"a3f3b66b9aae4bc34e7b41ef1b72ae5d63b1baab614434e52e02b470854715a0"} Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.413667 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f3b66b9aae4bc34e7b41ef1b72ae5d63b1baab614434e52e02b470854715a0" Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.468739 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:48 crc kubenswrapper[4805]: I1203 00:30:48.510189 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77h8j"] Dec 03 00:30:49 crc kubenswrapper[4805]: I1203 00:30:49.421588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18"} Dec 03 00:30:50 crc kubenswrapper[4805]: I1203 00:30:50.427826 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-77h8j" podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="registry-server" containerID="cri-o://9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f" gracePeriod=2 Dec 03 00:30:50 crc kubenswrapper[4805]: I1203 00:30:50.803333 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:50 crc kubenswrapper[4805]: I1203 00:30:50.939271 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-utilities\") pod \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " Dec 03 00:30:50 crc kubenswrapper[4805]: I1203 00:30:50.939357 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-catalog-content\") pod \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " Dec 03 00:30:50 crc kubenswrapper[4805]: I1203 00:30:50.939568 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfdwx\" (UniqueName: \"kubernetes.io/projected/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-kube-api-access-qfdwx\") pod \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\" (UID: \"d9e51dfe-06fb-45da-93ad-2ba00ce6d973\") " Dec 03 00:30:50 crc kubenswrapper[4805]: I1203 00:30:50.940792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-utilities" (OuterVolumeSpecName: "utilities") pod "d9e51dfe-06fb-45da-93ad-2ba00ce6d973" (UID: "d9e51dfe-06fb-45da-93ad-2ba00ce6d973"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:50 crc kubenswrapper[4805]: I1203 00:30:50.948527 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-kube-api-access-qfdwx" (OuterVolumeSpecName: "kube-api-access-qfdwx") pod "d9e51dfe-06fb-45da-93ad-2ba00ce6d973" (UID: "d9e51dfe-06fb-45da-93ad-2ba00ce6d973"). InnerVolumeSpecName "kube-api-access-qfdwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.040876 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.040914 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfdwx\" (UniqueName: \"kubernetes.io/projected/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-kube-api-access-qfdwx\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.230927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9e51dfe-06fb-45da-93ad-2ba00ce6d973" (UID: "d9e51dfe-06fb-45da-93ad-2ba00ce6d973"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.242834 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9e51dfe-06fb-45da-93ad-2ba00ce6d973-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.435827 4805 generic.go:334] "Generic (PLEG): container finished" podID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerID="9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f" exitCode=0 Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.435892 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77h8j" event={"ID":"d9e51dfe-06fb-45da-93ad-2ba00ce6d973","Type":"ContainerDied","Data":"9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f"} Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.435948 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-77h8j" event={"ID":"d9e51dfe-06fb-45da-93ad-2ba00ce6d973","Type":"ContainerDied","Data":"914c4c551de1a7b7660f88bf1340faa1fa406156fb9d7861ae39453914830df9"} Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.435970 4805 scope.go:117] "RemoveContainer" containerID="9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.435909 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-77h8j" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.457287 4805 scope.go:117] "RemoveContainer" containerID="10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.476443 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77h8j"] Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.479570 4805 scope.go:117] "RemoveContainer" containerID="b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.485541 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-77h8j"] Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.500491 4805 scope.go:117] "RemoveContainer" containerID="9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f" Dec 03 00:30:51 crc kubenswrapper[4805]: E1203 00:30:51.500912 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f\": container with ID starting with 9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f not found: ID does not exist" containerID="9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 
00:30:51.500954 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f"} err="failed to get container status \"9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f\": rpc error: code = NotFound desc = could not find container \"9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f\": container with ID starting with 9be5e8a4b8ffe244ae2343df915c91f3b3f32194ba06cbe49d8335b40be1f00f not found: ID does not exist" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.500980 4805 scope.go:117] "RemoveContainer" containerID="10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d" Dec 03 00:30:51 crc kubenswrapper[4805]: E1203 00:30:51.501237 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d\": container with ID starting with 10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d not found: ID does not exist" containerID="10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.501265 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d"} err="failed to get container status \"10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d\": rpc error: code = NotFound desc = could not find container \"10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d\": container with ID starting with 10f0159f2291fe39d13c2ecdff0e1ac5f46a1fd999cc4cc56a621430f87f295d not found: ID does not exist" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.501289 4805 scope.go:117] "RemoveContainer" containerID="b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff" Dec 03 00:30:51 crc 
kubenswrapper[4805]: E1203 00:30:51.501494 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff\": container with ID starting with b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff not found: ID does not exist" containerID="b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff" Dec 03 00:30:51 crc kubenswrapper[4805]: I1203 00:30:51.501521 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff"} err="failed to get container status \"b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff\": rpc error: code = NotFound desc = could not find container \"b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff\": container with ID starting with b32aa7269bcda89d61347a835c47074cc08c777723c1587094f5d1b557156fff not found: ID does not exist" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.432067 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" path="/var/lib/kubelet/pods/d9e51dfe-06fb-45da-93ad-2ba00ce6d973/volumes" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.687780 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:30:52 crc kubenswrapper[4805]: E1203 00:30:52.688493 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerName="manage-dockerfile" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688508 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerName="manage-dockerfile" Dec 03 00:30:52 crc kubenswrapper[4805]: E1203 00:30:52.688521 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="extract-content" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688527 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="extract-content" Dec 03 00:30:52 crc kubenswrapper[4805]: E1203 00:30:52.688535 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerName="docker-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688544 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerName="docker-build" Dec 03 00:30:52 crc kubenswrapper[4805]: E1203 00:30:52.688557 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="registry-server" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688563 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="registry-server" Dec 03 00:30:52 crc kubenswrapper[4805]: E1203 00:30:52.688577 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerName="git-clone" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688583 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerName="git-clone" Dec 03 00:30:52 crc kubenswrapper[4805]: E1203 00:30:52.688593 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="extract-utilities" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688599 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="extract-utilities" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688718 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d9e51dfe-06fb-45da-93ad-2ba00ce6d973" containerName="registry-server" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.688728 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff1a9d5-5edc-4ecb-8f79-5ac953580aac" containerName="docker-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.689373 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.691259 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.692978 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.693155 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.693305 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.711656 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.766925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.767327 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.767423 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.767876 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.768091 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnk2k\" (UniqueName: \"kubernetes.io/projected/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-kube-api-access-hnk2k\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.768234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 
crc kubenswrapper[4805]: I1203 00:30:52.768360 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.768456 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.768599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.768660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.768683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.768715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.869952 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870324 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870553 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnk2k\" (UniqueName: \"kubernetes.io/projected/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-kube-api-access-hnk2k\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870845 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870928 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 
03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871485 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.870933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871340 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.871964 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.872586 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.872995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.878000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.878663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-push\") pod \"prometheus-webhook-snmp-1-build\" 
(UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:52 crc kubenswrapper[4805]: I1203 00:30:52.889573 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnk2k\" (UniqueName: \"kubernetes.io/projected/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-kube-api-access-hnk2k\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:53 crc kubenswrapper[4805]: I1203 00:30:53.007623 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:30:53 crc kubenswrapper[4805]: I1203 00:30:53.249291 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:30:53 crc kubenswrapper[4805]: I1203 00:30:53.451726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"7508b5ac-7ccf-43c9-bcc0-bf3a72364328","Type":"ContainerStarted","Data":"4eb7d2080bb6473de594b8854230179f5db2c57c92b741507602c666c0051bd1"} Dec 03 00:30:54 crc kubenswrapper[4805]: I1203 00:30:54.460309 4805 generic.go:334] "Generic (PLEG): container finished" podID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" containerID="fe628ec4576d16a5e82eb98d78bcb3fdf1dc363ae27e167d8bb9ab33ea70df8b" exitCode=0 Dec 03 00:30:54 crc kubenswrapper[4805]: I1203 00:30:54.460366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"7508b5ac-7ccf-43c9-bcc0-bf3a72364328","Type":"ContainerDied","Data":"fe628ec4576d16a5e82eb98d78bcb3fdf1dc363ae27e167d8bb9ab33ea70df8b"} Dec 03 00:30:55 crc kubenswrapper[4805]: I1203 00:30:55.467866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" 
event={"ID":"7508b5ac-7ccf-43c9-bcc0-bf3a72364328","Type":"ContainerStarted","Data":"397997bfd91f6160c65dd058812f63a98fefe2e08cea51ccee3e503c35b92a36"} Dec 03 00:30:55 crc kubenswrapper[4805]: I1203 00:30:55.496275 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.496247127 podStartE2EDuration="3.496247127s" podCreationTimestamp="2025-12-03 00:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:30:55.488138309 +0000 UTC m=+1479.337100935" watchObservedRunningTime="2025-12-03 00:30:55.496247127 +0000 UTC m=+1479.345209733" Dec 03 00:31:02 crc kubenswrapper[4805]: I1203 00:31:02.755980 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:31:02 crc kubenswrapper[4805]: I1203 00:31:02.756984 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" containerName="docker-build" containerID="cri-o://397997bfd91f6160c65dd058812f63a98fefe2e08cea51ccee3e503c35b92a36" gracePeriod=30 Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.523148 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_7508b5ac-7ccf-43c9-bcc0-bf3a72364328/docker-build/0.log" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.524172 4805 generic.go:334] "Generic (PLEG): container finished" podID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" containerID="397997bfd91f6160c65dd058812f63a98fefe2e08cea51ccee3e503c35b92a36" exitCode=1 Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.524238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" 
event={"ID":"7508b5ac-7ccf-43c9-bcc0-bf3a72364328","Type":"ContainerDied","Data":"397997bfd91f6160c65dd058812f63a98fefe2e08cea51ccee3e503c35b92a36"} Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.624494 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_7508b5ac-7ccf-43c9-bcc0-bf3a72364328/docker-build/0.log" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.625302 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.825953 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-system-configs\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826080 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-root\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-run\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826228 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnk2k\" (UniqueName: \"kubernetes.io/projected/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-kube-api-access-hnk2k\") pod 
\"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildcachedir\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826314 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-push\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826355 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildworkdir\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826390 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-ca-bundles\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826625 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-pull\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826663 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-node-pullsecrets\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826716 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-blob-cache\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826763 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-proxy-ca-bundles\") pod \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\" (UID: \"7508b5ac-7ccf-43c9-bcc0-bf3a72364328\") " Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.826981 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.827212 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.827235 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.827247 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.827251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.827869 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.827928 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.828080 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.832613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.834213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.834980 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-kube-api-access-hnk2k" (OuterVolumeSpecName: "kube-api-access-hnk2k") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "kube-api-access-hnk2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.886653 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928210 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928240 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnk2k\" (UniqueName: \"kubernetes.io/projected/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-kube-api-access-hnk2k\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928252 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928269 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928283 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928294 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928304 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-blob-cache\") on node \"crc\" 
DevicePath \"\"" Dec 03 00:31:03 crc kubenswrapper[4805]: I1203 00:31:03.928317 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.128684 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7508b5ac-7ccf-43c9-bcc0-bf3a72364328" (UID: "7508b5ac-7ccf-43c9-bcc0-bf3a72364328"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.131551 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7508b5ac-7ccf-43c9-bcc0-bf3a72364328-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.365072 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 03 00:31:04 crc kubenswrapper[4805]: E1203 00:31:04.365430 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" containerName="manage-dockerfile" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.365455 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" containerName="manage-dockerfile" Dec 03 00:31:04 crc kubenswrapper[4805]: E1203 00:31:04.365486 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" containerName="docker-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.365495 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" 
containerName="docker-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.365646 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" containerName="docker-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.366673 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.370441 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.370824 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.371011 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.378803 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.534247 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_7508b5ac-7ccf-43c9-bcc0-bf3a72364328/docker-build/0.log" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.534820 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"7508b5ac-7ccf-43c9-bcc0-bf3a72364328","Type":"ContainerDied","Data":"4eb7d2080bb6473de594b8854230179f5db2c57c92b741507602c666c0051bd1"} Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.534871 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.534908 4805 scope.go:117] "RemoveContainer" containerID="397997bfd91f6160c65dd058812f63a98fefe2e08cea51ccee3e503c35b92a36" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.535754 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.535816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536021 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc 
kubenswrapper[4805]: I1203 00:31:04.536273 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536415 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55qh\" (UniqueName: \"kubernetes.io/projected/8b9ba29d-c049-4305-8b24-753655b00e8b-kube-api-access-k55qh\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: 
\"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.536611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.568073 4805 scope.go:117] "RemoveContainer" containerID="fe628ec4576d16a5e82eb98d78bcb3fdf1dc363ae27e167d8bb9ab33ea70df8b" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.574672 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.583014 4805 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.639131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k55qh\" (UniqueName: \"kubernetes.io/projected/8b9ba29d-c049-4305-8b24-753655b00e8b-kube-api-access-k55qh\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.639507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.639724 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.639897 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640005 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-buildworkdir\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640611 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640794 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.640316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.641002 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.641030 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.641408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.641599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.641774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.641965 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.641508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.642373 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.642485 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.645663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.645741 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.664438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k55qh\" (UniqueName: \"kubernetes.io/projected/8b9ba29d-c049-4305-8b24-753655b00e8b-kube-api-access-k55qh\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc 
kubenswrapper[4805]: I1203 00:31:04.739546 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:31:04 crc kubenswrapper[4805]: I1203 00:31:04.959076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 03 00:31:05 crc kubenswrapper[4805]: I1203 00:31:05.558907 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8b9ba29d-c049-4305-8b24-753655b00e8b","Type":"ContainerStarted","Data":"22f5e1da4d0b91ab6d72f27af9c5b121e89d4174074d9f65427b97ae71e3708c"} Dec 03 00:31:06 crc kubenswrapper[4805]: I1203 00:31:06.438016 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7508b5ac-7ccf-43c9-bcc0-bf3a72364328" path="/var/lib/kubelet/pods/7508b5ac-7ccf-43c9-bcc0-bf3a72364328/volumes" Dec 03 00:31:06 crc kubenswrapper[4805]: I1203 00:31:06.569022 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8b9ba29d-c049-4305-8b24-753655b00e8b","Type":"ContainerStarted","Data":"b1ca66c8820192b8b35363f8a5d7bcd06e70a0d37380e27266e206e28179059e"} Dec 03 00:31:07 crc kubenswrapper[4805]: I1203 00:31:07.580662 4805 generic.go:334] "Generic (PLEG): container finished" podID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerID="b1ca66c8820192b8b35363f8a5d7bcd06e70a0d37380e27266e206e28179059e" exitCode=0 Dec 03 00:31:07 crc kubenswrapper[4805]: I1203 00:31:07.580784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8b9ba29d-c049-4305-8b24-753655b00e8b","Type":"ContainerDied","Data":"b1ca66c8820192b8b35363f8a5d7bcd06e70a0d37380e27266e206e28179059e"} Dec 03 00:31:08 crc kubenswrapper[4805]: I1203 00:31:08.592236 4805 generic.go:334] "Generic (PLEG): container finished" podID="8b9ba29d-c049-4305-8b24-753655b00e8b" 
containerID="6dfac64f638f5fd28280f932f096a35ac7e4d0e8410243497276aac1a7b7ad9b" exitCode=0 Dec 03 00:31:08 crc kubenswrapper[4805]: I1203 00:31:08.592331 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8b9ba29d-c049-4305-8b24-753655b00e8b","Type":"ContainerDied","Data":"6dfac64f638f5fd28280f932f096a35ac7e4d0e8410243497276aac1a7b7ad9b"} Dec 03 00:31:08 crc kubenswrapper[4805]: I1203 00:31:08.652814 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_8b9ba29d-c049-4305-8b24-753655b00e8b/manage-dockerfile/0.log" Dec 03 00:31:09 crc kubenswrapper[4805]: I1203 00:31:09.602171 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8b9ba29d-c049-4305-8b24-753655b00e8b","Type":"ContainerStarted","Data":"8b2cccb7a5f2fb353eea31fff51bcdee93ac368a6933e77780d4a582a684e215"} Dec 03 00:31:09 crc kubenswrapper[4805]: I1203 00:31:09.636488 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.636466587 podStartE2EDuration="5.636466587s" podCreationTimestamp="2025-12-03 00:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:31:09.631880417 +0000 UTC m=+1493.480843053" watchObservedRunningTime="2025-12-03 00:31:09.636466587 +0000 UTC m=+1493.485429183" Dec 03 00:32:08 crc kubenswrapper[4805]: I1203 00:32:08.000235 4805 generic.go:334] "Generic (PLEG): container finished" podID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerID="8b2cccb7a5f2fb353eea31fff51bcdee93ac368a6933e77780d4a582a684e215" exitCode=0 Dec 03 00:32:08 crc kubenswrapper[4805]: I1203 00:32:08.000324 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"8b9ba29d-c049-4305-8b24-753655b00e8b","Type":"ContainerDied","Data":"8b2cccb7a5f2fb353eea31fff51bcdee93ac368a6933e77780d4a582a684e215"} Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.289351 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.439805 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-node-pullsecrets\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.439883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-root\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.439934 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-system-configs\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.439926 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.439968 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-run\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.439991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-proxy-ca-bundles\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440034 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-ca-bundles\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440070 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k55qh\" (UniqueName: \"kubernetes.io/projected/8b9ba29d-c049-4305-8b24-753655b00e8b-kube-api-access-k55qh\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440111 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-buildcachedir\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-pull\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-buildworkdir\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440184 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-push\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440345 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440752 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440772 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8b9ba29d-c049-4305-8b24-753655b00e8b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.440980 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.441531 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.444080 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.444237 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.444251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.453378 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.465876 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.468139 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9ba29d-c049-4305-8b24-753655b00e8b-kube-api-access-k55qh" (OuterVolumeSpecName: "kube-api-access-k55qh") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "kube-api-access-k55qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.541510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.542490 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache\") pod \"8b9ba29d-c049-4305-8b24-753655b00e8b\" (UID: \"8b9ba29d-c049-4305-8b24-753655b00e8b\") " Dec 03 00:32:09 crc kubenswrapper[4805]: W1203 00:32:09.542636 4805 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8b9ba29d-c049-4305-8b24-753655b00e8b/volumes/kubernetes.io~empty-dir/build-blob-cache Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.542657 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543107 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543235 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543322 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543395 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b9ba29d-c049-4305-8b24-753655b00e8b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543468 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k55qh\" (UniqueName: \"kubernetes.io/projected/8b9ba29d-c049-4305-8b24-753655b00e8b-kube-api-access-k55qh\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543541 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543612 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-buildworkdir\") on node \"crc\" 
DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543694 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/8b9ba29d-c049-4305-8b24-753655b00e8b-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:09 crc kubenswrapper[4805]: I1203 00:32:09.543767 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:10 crc kubenswrapper[4805]: I1203 00:32:10.016796 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"8b9ba29d-c049-4305-8b24-753655b00e8b","Type":"ContainerDied","Data":"22f5e1da4d0b91ab6d72f27af9c5b121e89d4174074d9f65427b97ae71e3708c"} Dec 03 00:32:10 crc kubenswrapper[4805]: I1203 00:32:10.016873 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22f5e1da4d0b91ab6d72f27af9c5b121e89d4174074d9f65427b97ae71e3708c" Dec 03 00:32:10 crc kubenswrapper[4805]: I1203 00:32:10.016933 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 03 00:32:10 crc kubenswrapper[4805]: I1203 00:32:10.220661 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8b9ba29d-c049-4305-8b24-753655b00e8b" (UID: "8b9ba29d-c049-4305-8b24-753655b00e8b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:10 crc kubenswrapper[4805]: I1203 00:32:10.253767 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8b9ba29d-c049-4305-8b24-753655b00e8b-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.170588 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 03 00:32:19 crc kubenswrapper[4805]: E1203 00:32:19.171673 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerName="docker-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.171698 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerName="docker-build" Dec 03 00:32:19 crc kubenswrapper[4805]: E1203 00:32:19.171714 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerName="git-clone" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.171725 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerName="git-clone" Dec 03 00:32:19 crc kubenswrapper[4805]: E1203 00:32:19.171750 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerName="manage-dockerfile" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.171763 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerName="manage-dockerfile" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.171955 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9ba29d-c049-4305-8b24-753655b00e8b" containerName="docker-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.173078 4805 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.175409 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.175550 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-global-ca" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.175567 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-sys-config" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.177829 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-ca" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.194027 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300029 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc 
kubenswrapper[4805]: I1203 00:32:19.300123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300173 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300306 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300362 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300488 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300528 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjt6d\" (UniqueName: \"kubernetes.io/projected/5871a4e0-32eb-47af-86f0-2e59508ff814-kube-api-access-xjt6d\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300559 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.300614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.401875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.401922 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.401967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.401995 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjt6d\" (UniqueName: 
\"kubernetes.io/projected/5871a4e0-32eb-47af-86f0-2e59508ff814-kube-api-access-xjt6d\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402024 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402042 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402061 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc 
kubenswrapper[4805]: I1203 00:32:19.402089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402146 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402161 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-run\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402326 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.402419 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.403026 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.403226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.403445 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.403534 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.403603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.403760 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.408421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.408779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.432747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjt6d\" (UniqueName: \"kubernetes.io/projected/5871a4e0-32eb-47af-86f0-2e59508ff814-kube-api-access-xjt6d\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.498998 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:19 crc kubenswrapper[4805]: I1203 00:32:19.709352 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 03 00:32:19 crc kubenswrapper[4805]: W1203 00:32:19.720361 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5871a4e0_32eb_47af_86f0_2e59508ff814.slice/crio-735a7e3a7ec3edb994491c6d6fce3a73a2c8f08f7f602a1c7014792132f48452 WatchSource:0}: Error finding container 735a7e3a7ec3edb994491c6d6fce3a73a2c8f08f7f602a1c7014792132f48452: Status 404 returned error can't find the container with id 735a7e3a7ec3edb994491c6d6fce3a73a2c8f08f7f602a1c7014792132f48452 Dec 03 00:32:20 crc kubenswrapper[4805]: I1203 00:32:20.086228 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"5871a4e0-32eb-47af-86f0-2e59508ff814","Type":"ContainerStarted","Data":"735a7e3a7ec3edb994491c6d6fce3a73a2c8f08f7f602a1c7014792132f48452"} Dec 03 00:32:21 crc kubenswrapper[4805]: I1203 00:32:21.094857 4805 generic.go:334] "Generic (PLEG): container finished" podID="5871a4e0-32eb-47af-86f0-2e59508ff814" containerID="4987839dcf54ccd09914c177bbdad95e70be0e54a47164101edff1a4fa30f2ad" exitCode=0 Dec 03 00:32:21 crc kubenswrapper[4805]: I1203 00:32:21.094922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"5871a4e0-32eb-47af-86f0-2e59508ff814","Type":"ContainerDied","Data":"4987839dcf54ccd09914c177bbdad95e70be0e54a47164101edff1a4fa30f2ad"} Dec 03 00:32:22 crc kubenswrapper[4805]: I1203 00:32:22.104149 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_5871a4e0-32eb-47af-86f0-2e59508ff814/docker-build/0.log" Dec 03 
00:32:22 crc kubenswrapper[4805]: I1203 00:32:22.105060 4805 generic.go:334] "Generic (PLEG): container finished" podID="5871a4e0-32eb-47af-86f0-2e59508ff814" containerID="5da6bbe383954c119ef39f60cd9874465ac32b7c4bc9e2b313306d6635743253" exitCode=1 Dec 03 00:32:22 crc kubenswrapper[4805]: I1203 00:32:22.105144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"5871a4e0-32eb-47af-86f0-2e59508ff814","Type":"ContainerDied","Data":"5da6bbe383954c119ef39f60cd9874465ac32b7c4bc9e2b313306d6635743253"} Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.359708 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_5871a4e0-32eb-47af-86f0-2e59508ff814/docker-build/0.log" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.360672 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565322 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-ca-bundles\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565395 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-build-blob-cache\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565487 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-node-pullsecrets\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565507 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-buildcachedir\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565554 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-system-configs\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565576 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-run\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565615 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-root\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjt6d\" (UniqueName: \"kubernetes.io/projected/5871a4e0-32eb-47af-86f0-2e59508ff814-kube-api-access-xjt6d\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") 
" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565646 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565711 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-buildworkdir\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565756 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-proxy-ca-bundles\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565782 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-push\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-pull\") pod \"5871a4e0-32eb-47af-86f0-2e59508ff814\" (UID: \"5871a4e0-32eb-47af-86f0-2e59508ff814\") " Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565865 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.565977 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.566046 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.566073 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5871a4e0-32eb-47af-86f0-2e59508ff814-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.566350 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.566888 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.566911 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.567506 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.567606 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.568155 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.573370 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.574749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5871a4e0-32eb-47af-86f0-2e59508ff814-kube-api-access-xjt6d" (OuterVolumeSpecName: "kube-api-access-xjt6d") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "kube-api-access-xjt6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.579401 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "5871a4e0-32eb-47af-86f0-2e59508ff814" (UID: "5871a4e0-32eb-47af-86f0-2e59508ff814"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667400 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667696 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667710 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667721 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjt6d\" (UniqueName: \"kubernetes.io/projected/5871a4e0-32eb-47af-86f0-2e59508ff814-kube-api-access-xjt6d\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667733 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667744 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667755 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-push\") on node 
\"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667771 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/5871a4e0-32eb-47af-86f0-2e59508ff814-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667784 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5871a4e0-32eb-47af-86f0-2e59508ff814-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:23 crc kubenswrapper[4805]: I1203 00:32:23.667798 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5871a4e0-32eb-47af-86f0-2e59508ff814-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:24 crc kubenswrapper[4805]: I1203 00:32:24.122185 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_5871a4e0-32eb-47af-86f0-2e59508ff814/docker-build/0.log" Dec 03 00:32:24 crc kubenswrapper[4805]: I1203 00:32:24.123251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"5871a4e0-32eb-47af-86f0-2e59508ff814","Type":"ContainerDied","Data":"735a7e3a7ec3edb994491c6d6fce3a73a2c8f08f7f602a1c7014792132f48452"} Dec 03 00:32:24 crc kubenswrapper[4805]: I1203 00:32:24.123300 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735a7e3a7ec3edb994491c6d6fce3a73a2c8f08f7f602a1c7014792132f48452" Dec 03 00:32:24 crc kubenswrapper[4805]: I1203 00:32:24.123348 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 03 00:32:28 crc kubenswrapper[4805]: I1203 00:32:28.937178 4805 scope.go:117] "RemoveContainer" containerID="c5a5deddf6c3135e8786e7763a3d18aa0176ea3cfc1e84020852b358b05c2f61" Dec 03 00:32:28 crc kubenswrapper[4805]: I1203 00:32:28.965594 4805 scope.go:117] "RemoveContainer" containerID="7ff3efaf039bf5db90b260663af5a9eff98137c545a808fec5d9107420035710" Dec 03 00:32:29 crc kubenswrapper[4805]: I1203 00:32:29.676545 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 03 00:32:29 crc kubenswrapper[4805]: I1203 00:32:29.684528 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 03 00:32:30 crc kubenswrapper[4805]: I1203 00:32:30.434252 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5871a4e0-32eb-47af-86f0-2e59508ff814" path="/var/lib/kubelet/pods/5871a4e0-32eb-47af-86f0-2e59508ff814/volumes" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.305470 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 03 00:32:31 crc kubenswrapper[4805]: E1203 00:32:31.306114 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5871a4e0-32eb-47af-86f0-2e59508ff814" containerName="manage-dockerfile" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.306264 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5871a4e0-32eb-47af-86f0-2e59508ff814" containerName="manage-dockerfile" Dec 03 00:32:31 crc kubenswrapper[4805]: E1203 00:32:31.306366 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5871a4e0-32eb-47af-86f0-2e59508ff814" containerName="docker-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.306449 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5871a4e0-32eb-47af-86f0-2e59508ff814" containerName="docker-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.306669 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5871a4e0-32eb-47af-86f0-2e59508ff814" containerName="docker-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.307893 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.312970 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-ca" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.313432 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.313857 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-sys-config" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.313872 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-global-ca" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.340662 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.388588 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.388892 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389039 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389134 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389260 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389455 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: 
\"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r629\" (UniqueName: \"kubernetes.io/projected/cc954444-0b06-4b98-805d-b73ec50476b0-kube-api-access-5r629\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389878 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389935 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.389974 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.390031 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491703 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r629\" (UniqueName: \"kubernetes.io/projected/cc954444-0b06-4b98-805d-b73ec50476b0-kube-api-access-5r629\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491812 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491840 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491865 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491939 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.491985 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492012 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492058 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492553 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492553 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492618 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492846 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.492972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.493434 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.493543 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.498928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.500018 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.514577 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r629\" (UniqueName: \"kubernetes.io/projected/cc954444-0b06-4b98-805d-b73ec50476b0-kube-api-access-5r629\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:31 crc kubenswrapper[4805]: I1203 00:32:31.629706 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:32 crc kubenswrapper[4805]: I1203 00:32:32.062571 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 03 00:32:32 crc kubenswrapper[4805]: I1203 00:32:32.189900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cc954444-0b06-4b98-805d-b73ec50476b0","Type":"ContainerStarted","Data":"7260a5737cb74d7d02a74864da6d990c9b82652164dc0c002d96895843f183b3"} Dec 03 00:32:33 crc kubenswrapper[4805]: I1203 00:32:33.199507 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cc954444-0b06-4b98-805d-b73ec50476b0","Type":"ContainerStarted","Data":"e3b403600a4a1426848c8485046807a27c8133e8fa7815241270d51861bcd107"} Dec 03 00:32:33 crc kubenswrapper[4805]: E1203 00:32:33.329247 4805 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.130:57992->38.102.83.130:46439: write tcp 38.102.83.130:57992->38.102.83.130:46439: write: broken pipe Dec 03 00:32:34 crc kubenswrapper[4805]: I1203 00:32:34.208733 4805 generic.go:334] "Generic (PLEG): container finished" podID="cc954444-0b06-4b98-805d-b73ec50476b0" containerID="e3b403600a4a1426848c8485046807a27c8133e8fa7815241270d51861bcd107" exitCode=0 Dec 03 00:32:34 crc kubenswrapper[4805]: I1203 00:32:34.208844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cc954444-0b06-4b98-805d-b73ec50476b0","Type":"ContainerDied","Data":"e3b403600a4a1426848c8485046807a27c8133e8fa7815241270d51861bcd107"} Dec 03 00:32:35 crc kubenswrapper[4805]: I1203 00:32:35.218108 4805 generic.go:334] "Generic (PLEG): container finished" podID="cc954444-0b06-4b98-805d-b73ec50476b0" 
containerID="e26bdd743afe2c9468079a1633f9a336a1c567556e8a5a30ba659a26c2dbfd9c" exitCode=0 Dec 03 00:32:35 crc kubenswrapper[4805]: I1203 00:32:35.218162 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cc954444-0b06-4b98-805d-b73ec50476b0","Type":"ContainerDied","Data":"e26bdd743afe2c9468079a1633f9a336a1c567556e8a5a30ba659a26c2dbfd9c"} Dec 03 00:32:35 crc kubenswrapper[4805]: I1203 00:32:35.267609 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_cc954444-0b06-4b98-805d-b73ec50476b0/manage-dockerfile/0.log" Dec 03 00:32:36 crc kubenswrapper[4805]: I1203 00:32:36.225716 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cc954444-0b06-4b98-805d-b73ec50476b0","Type":"ContainerStarted","Data":"dba82b412852bad02bf3fed6347bb6b3d593028f554177db36514c5bdfb728e3"} Dec 03 00:32:36 crc kubenswrapper[4805]: I1203 00:32:36.268095 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.268062474 podStartE2EDuration="5.268062474s" podCreationTimestamp="2025-12-03 00:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:32:36.259253803 +0000 UTC m=+1580.108216449" watchObservedRunningTime="2025-12-03 00:32:36.268062474 +0000 UTC m=+1580.117025090" Dec 03 00:32:39 crc kubenswrapper[4805]: I1203 00:32:39.261092 4805 generic.go:334] "Generic (PLEG): container finished" podID="cc954444-0b06-4b98-805d-b73ec50476b0" containerID="dba82b412852bad02bf3fed6347bb6b3d593028f554177db36514c5bdfb728e3" exitCode=0 Dec 03 00:32:39 crc kubenswrapper[4805]: I1203 00:32:39.261336 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cc954444-0b06-4b98-805d-b73ec50476b0","Type":"ContainerDied","Data":"dba82b412852bad02bf3fed6347bb6b3d593028f554177db36514c5bdfb728e3"} Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.565377 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648403 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-buildcachedir\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648465 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-run\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648485 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-node-pullsecrets\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648509 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-buildworkdir\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648515 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648557 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-build-blob-cache\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648592 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-ca-bundles\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648671 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-proxy-ca-bundles\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648715 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-system-configs\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648741 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r629\" (UniqueName: \"kubernetes.io/projected/cc954444-0b06-4b98-805d-b73ec50476b0-kube-api-access-5r629\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648786 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-push\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648826 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-pull\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.648857 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-root\") pod \"cc954444-0b06-4b98-805d-b73ec50476b0\" (UID: \"cc954444-0b06-4b98-805d-b73ec50476b0\") " Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.649397 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.649425 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc954444-0b06-4b98-805d-b73ec50476b0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.649501 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.649843 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.649883 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.650352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.650567 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.651724 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.654743 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.654772 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc954444-0b06-4b98-805d-b73ec50476b0-kube-api-access-5r629" (OuterVolumeSpecName: "kube-api-access-5r629") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "kube-api-access-5r629". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.656008 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.658354 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "cc954444-0b06-4b98-805d-b73ec50476b0" (UID: "cc954444-0b06-4b98-805d-b73ec50476b0"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751154 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751260 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751273 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc954444-0b06-4b98-805d-b73ec50476b0-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751286 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r629\" (UniqueName: \"kubernetes.io/projected/cc954444-0b06-4b98-805d-b73ec50476b0-kube-api-access-5r629\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751296 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 
00:32:40.751306 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/cc954444-0b06-4b98-805d-b73ec50476b0-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751314 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751323 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751334 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:40 crc kubenswrapper[4805]: I1203 00:32:40.751341 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc954444-0b06-4b98-805d-b73ec50476b0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:41 crc kubenswrapper[4805]: I1203 00:32:41.277349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"cc954444-0b06-4b98-805d-b73ec50476b0","Type":"ContainerDied","Data":"7260a5737cb74d7d02a74864da6d990c9b82652164dc0c002d96895843f183b3"} Dec 03 00:32:41 crc kubenswrapper[4805]: I1203 00:32:41.277396 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7260a5737cb74d7d02a74864da6d990c9b82652164dc0c002d96895843f183b3" Dec 03 00:32:41 crc kubenswrapper[4805]: I1203 00:32:41.277417 4805 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.876474 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:32:44 crc kubenswrapper[4805]: E1203 00:32:44.877010 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc954444-0b06-4b98-805d-b73ec50476b0" containerName="docker-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.877022 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc954444-0b06-4b98-805d-b73ec50476b0" containerName="docker-build" Dec 03 00:32:44 crc kubenswrapper[4805]: E1203 00:32:44.877032 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc954444-0b06-4b98-805d-b73ec50476b0" containerName="git-clone" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.877037 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc954444-0b06-4b98-805d-b73ec50476b0" containerName="git-clone" Dec 03 00:32:44 crc kubenswrapper[4805]: E1203 00:32:44.877054 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc954444-0b06-4b98-805d-b73ec50476b0" containerName="manage-dockerfile" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.877062 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc954444-0b06-4b98-805d-b73ec50476b0" containerName="manage-dockerfile" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.877169 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc954444-0b06-4b98-805d-b73ec50476b0" containerName="docker-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.877968 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.880117 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.880350 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.881965 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.882407 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908020 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908090 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908124 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmnj8\" (UniqueName: 
\"kubernetes.io/projected/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-kube-api-access-mmnj8\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908155 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908230 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908285 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908315 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908352 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908380 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.908421 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:44 crc kubenswrapper[4805]: I1203 00:32:44.942757 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.010410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.010553 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.011105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.011189 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmnj8\" (UniqueName: 
\"kubernetes.io/projected/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-kube-api-access-mmnj8\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.011637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.012292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.012370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.012715 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.013274 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.012405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.013382 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.013483 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.013587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.014032 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.014081 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.014127 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.014177 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.014680 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.014825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.014960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.015231 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.017703 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.021609 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.039888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmnj8\" (UniqueName: \"kubernetes.io/projected/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-kube-api-access-mmnj8\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.197282 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:45 crc kubenswrapper[4805]: I1203 00:32:45.451826 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:32:46 crc kubenswrapper[4805]: I1203 00:32:46.321454 4805 generic.go:334] "Generic (PLEG): container finished" podID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" containerID="4b6f5927a4a2bc0b8d1a0cd39416c2ab577e8f5b0b86d0e6a1790e68fec3a075" exitCode=0 Dec 03 00:32:46 crc kubenswrapper[4805]: I1203 00:32:46.321505 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"31fa6ffc-27e1-4bc7-b5bd-2888be82b062","Type":"ContainerDied","Data":"4b6f5927a4a2bc0b8d1a0cd39416c2ab577e8f5b0b86d0e6a1790e68fec3a075"} Dec 03 00:32:46 crc kubenswrapper[4805]: I1203 00:32:46.321538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" 
event={"ID":"31fa6ffc-27e1-4bc7-b5bd-2888be82b062","Type":"ContainerStarted","Data":"5aa4e778666697238196fc532f62b71a9b943ce1f91a6f26e394495d3d46ca16"} Dec 03 00:32:47 crc kubenswrapper[4805]: I1203 00:32:47.329036 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_31fa6ffc-27e1-4bc7-b5bd-2888be82b062/docker-build/0.log" Dec 03 00:32:47 crc kubenswrapper[4805]: I1203 00:32:47.330074 4805 generic.go:334] "Generic (PLEG): container finished" podID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" containerID="a0e2e36d4597cdd2784dcfc46cfe1ddc7349fd197f78bfccfdeb8cd75d5c8123" exitCode=1 Dec 03 00:32:47 crc kubenswrapper[4805]: I1203 00:32:47.330118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"31fa6ffc-27e1-4bc7-b5bd-2888be82b062","Type":"ContainerDied","Data":"a0e2e36d4597cdd2784dcfc46cfe1ddc7349fd197f78bfccfdeb8cd75d5c8123"} Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.632467 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_31fa6ffc-27e1-4bc7-b5bd-2888be82b062/docker-build/0.log" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.633305 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-node-pullsecrets\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682649 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildworkdir\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682715 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmnj8\" (UniqueName: \"kubernetes.io/projected/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-kube-api-access-mmnj8\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682751 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-proxy-ca-bundles\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682408 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-root\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682853 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-system-configs\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-run\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682930 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-blob-cache\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.682958 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildcachedir\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.683005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-pull\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.683041 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-push\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.683078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-ca-bundles\") pod \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\" (UID: \"31fa6ffc-27e1-4bc7-b5bd-2888be82b062\") " Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.683139 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.683800 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684104 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684328 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684505 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684680 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684770 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684791 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684800 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684810 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684819 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684828 4805 reconciler_common.go:293] 
"Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.684836 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.685368 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.690254 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.690651 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-kube-api-access-mmnj8" (OuterVolumeSpecName: "kube-api-access-mmnj8") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "kube-api-access-mmnj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.690661 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "31fa6ffc-27e1-4bc7-b5bd-2888be82b062" (UID: "31fa6ffc-27e1-4bc7-b5bd-2888be82b062"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.786172 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.786245 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmnj8\" (UniqueName: \"kubernetes.io/projected/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-kube-api-access-mmnj8\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.786262 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.786279 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:48 crc kubenswrapper[4805]: I1203 00:32:48.786293 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/31fa6ffc-27e1-4bc7-b5bd-2888be82b062-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:32:49 crc kubenswrapper[4805]: I1203 
00:32:49.348297 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_31fa6ffc-27e1-4bc7-b5bd-2888be82b062/docker-build/0.log" Dec 03 00:32:49 crc kubenswrapper[4805]: I1203 00:32:49.348791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"31fa6ffc-27e1-4bc7-b5bd-2888be82b062","Type":"ContainerDied","Data":"5aa4e778666697238196fc532f62b71a9b943ce1f91a6f26e394495d3d46ca16"} Dec 03 00:32:49 crc kubenswrapper[4805]: I1203 00:32:49.348835 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa4e778666697238196fc532f62b71a9b943ce1f91a6f26e394495d3d46ca16" Dec 03 00:32:49 crc kubenswrapper[4805]: I1203 00:32:49.348853 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 03 00:32:55 crc kubenswrapper[4805]: I1203 00:32:55.906116 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:32:55 crc kubenswrapper[4805]: I1203 00:32:55.918957 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 03 00:32:56 crc kubenswrapper[4805]: I1203 00:32:56.435789 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" path="/var/lib/kubelet/pods/31fa6ffc-27e1-4bc7-b5bd-2888be82b062/volumes" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.915535 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 03 00:32:57 crc kubenswrapper[4805]: E1203 00:32:57.916306 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" containerName="manage-dockerfile" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 
00:32:57.916325 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" containerName="manage-dockerfile" Dec 03 00:32:57 crc kubenswrapper[4805]: E1203 00:32:57.916340 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" containerName="docker-build" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.916347 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" containerName="docker-build" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.916548 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="31fa6ffc-27e1-4bc7-b5bd-2888be82b062" containerName="docker-build" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.917571 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.919883 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.920270 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-global-ca" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.920472 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-sys-config" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.920669 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-ca" Dec 03 00:32:57 crc kubenswrapper[4805]: I1203 00:32:57.936545 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019308 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019367 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019392 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019437 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-push\") pod 
\"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019482 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019641 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019671 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlt6j\" (UniqueName: \"kubernetes.io/projected/232647fe-de27-44e9-b10c-e0da1e2b2eae-kube-api-access-vlt6j\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 
00:32:58.019699 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019814 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.019933 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: 
\"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122247 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122388 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122448 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 
00:32:58.122672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122728 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlt6j\" (UniqueName: \"kubernetes.io/projected/232647fe-de27-44e9-b10c-e0da1e2b2eae-kube-api-access-vlt6j\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.122946 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.123031 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.123310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.124040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.124381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.125030 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.134828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.137960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.143140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlt6j\" (UniqueName: \"kubernetes.io/projected/232647fe-de27-44e9-b10c-e0da1e2b2eae-kube-api-access-vlt6j\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.246107 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:32:58 crc kubenswrapper[4805]: I1203 00:32:58.677801 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 03 00:32:59 crc kubenswrapper[4805]: I1203 00:32:59.428129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"232647fe-de27-44e9-b10c-e0da1e2b2eae","Type":"ContainerStarted","Data":"41255f40649cf157485256a80c67f1d8caac5c51df9d28c4e7aeafe5e4fc3c90"} Dec 03 00:32:59 crc kubenswrapper[4805]: I1203 00:32:59.428608 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"232647fe-de27-44e9-b10c-e0da1e2b2eae","Type":"ContainerStarted","Data":"b0369c81247c79ae5386e60b24402e3f523a2025fe97050b75bdb201e0c0060a"} Dec 03 00:33:00 crc kubenswrapper[4805]: I1203 00:33:00.435140 4805 generic.go:334] "Generic (PLEG): container finished" podID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerID="41255f40649cf157485256a80c67f1d8caac5c51df9d28c4e7aeafe5e4fc3c90" exitCode=0 Dec 03 00:33:00 crc kubenswrapper[4805]: I1203 00:33:00.435266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"232647fe-de27-44e9-b10c-e0da1e2b2eae","Type":"ContainerDied","Data":"41255f40649cf157485256a80c67f1d8caac5c51df9d28c4e7aeafe5e4fc3c90"} Dec 03 00:33:01 crc kubenswrapper[4805]: I1203 00:33:01.446067 4805 generic.go:334] "Generic (PLEG): container finished" podID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerID="5b7e72a67a453dfff98d2935e8e6947d6349e5f32c65a6186e61cff3e55ba450" exitCode=0 Dec 03 00:33:01 crc kubenswrapper[4805]: I1203 00:33:01.446156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" 
event={"ID":"232647fe-de27-44e9-b10c-e0da1e2b2eae","Type":"ContainerDied","Data":"5b7e72a67a453dfff98d2935e8e6947d6349e5f32c65a6186e61cff3e55ba450"} Dec 03 00:33:01 crc kubenswrapper[4805]: I1203 00:33:01.510781 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_232647fe-de27-44e9-b10c-e0da1e2b2eae/manage-dockerfile/0.log" Dec 03 00:33:02 crc kubenswrapper[4805]: I1203 00:33:02.456470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"232647fe-de27-44e9-b10c-e0da1e2b2eae","Type":"ContainerStarted","Data":"8faaad6c98443d67d819cbc19923262dca55dd17ecc39cf8d30682d7d0bcc8a3"} Dec 03 00:33:02 crc kubenswrapper[4805]: I1203 00:33:02.490347 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.490321081 podStartE2EDuration="5.490321081s" podCreationTimestamp="2025-12-03 00:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:33:02.48647429 +0000 UTC m=+1606.335436926" watchObservedRunningTime="2025-12-03 00:33:02.490321081 +0000 UTC m=+1606.339283687" Dec 03 00:33:04 crc kubenswrapper[4805]: I1203 00:33:04.488424 4805 generic.go:334] "Generic (PLEG): container finished" podID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerID="8faaad6c98443d67d819cbc19923262dca55dd17ecc39cf8d30682d7d0bcc8a3" exitCode=0 Dec 03 00:33:04 crc kubenswrapper[4805]: I1203 00:33:04.488512 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"232647fe-de27-44e9-b10c-e0da1e2b2eae","Type":"ContainerDied","Data":"8faaad6c98443d67d819cbc19923262dca55dd17ecc39cf8d30682d7d0bcc8a3"} Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.790044 4805 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.838689 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-ca-bundles\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.838792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-run\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.838840 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildcachedir\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.838893 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-blob-cache\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.838935 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-root\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.838982 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-system-configs\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839010 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-node-pullsecrets\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839084 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-proxy-ca-bundles\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839111 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-push\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839217 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839230 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839723 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839138 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlt6j\" (UniqueName: \"kubernetes.io/projected/232647fe-de27-44e9-b10c-e0da1e2b2eae-kube-api-access-vlt6j\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839794 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildworkdir\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.839820 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-pull\") pod \"232647fe-de27-44e9-b10c-e0da1e2b2eae\" (UID: \"232647fe-de27-44e9-b10c-e0da1e2b2eae\") " Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.840058 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.840635 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841075 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841083 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841150 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841166 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841179 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/232647fe-de27-44e9-b10c-e0da1e2b2eae-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841190 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841220 4805 reconciler_common.go:293] "Volume detached 
for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.841707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.845395 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.845425 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232647fe-de27-44e9-b10c-e0da1e2b2eae-kube-api-access-vlt6j" (OuterVolumeSpecName: "kube-api-access-vlt6j") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "kube-api-access-vlt6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.845772 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.846337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "232647fe-de27-44e9-b10c-e0da1e2b2eae" (UID: "232647fe-de27-44e9-b10c-e0da1e2b2eae"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.941894 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.941965 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/232647fe-de27-44e9-b10c-e0da1e2b2eae-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.941994 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.942059 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/232647fe-de27-44e9-b10c-e0da1e2b2eae-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: I1203 00:33:05.942083 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlt6j\" (UniqueName: \"kubernetes.io/projected/232647fe-de27-44e9-b10c-e0da1e2b2eae-kube-api-access-vlt6j\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:05 crc kubenswrapper[4805]: 
I1203 00:33:05.942105 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/232647fe-de27-44e9-b10c-e0da1e2b2eae-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:33:06 crc kubenswrapper[4805]: I1203 00:33:06.507096 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"232647fe-de27-44e9-b10c-e0da1e2b2eae","Type":"ContainerDied","Data":"b0369c81247c79ae5386e60b24402e3f523a2025fe97050b75bdb201e0c0060a"} Dec 03 00:33:06 crc kubenswrapper[4805]: I1203 00:33:06.507140 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0369c81247c79ae5386e60b24402e3f523a2025fe97050b75bdb201e0c0060a" Dec 03 00:33:06 crc kubenswrapper[4805]: I1203 00:33:06.507367 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 03 00:33:17 crc kubenswrapper[4805]: I1203 00:33:17.811730 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:33:17 crc kubenswrapper[4805]: I1203 00:33:17.812518 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.538970 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 03 00:33:22 crc kubenswrapper[4805]: E1203 00:33:22.539747 4805 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerName="manage-dockerfile" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.539760 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerName="manage-dockerfile" Dec 03 00:33:22 crc kubenswrapper[4805]: E1203 00:33:22.539800 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerName="docker-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.539808 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerName="docker-build" Dec 03 00:33:22 crc kubenswrapper[4805]: E1203 00:33:22.539830 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerName="git-clone" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.539837 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerName="git-clone" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.539958 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="232647fe-de27-44e9-b10c-e0da1e2b2eae" containerName="docker-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.540819 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.547865 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.548051 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-fwd7j" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.548120 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.548359 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.549035 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.566914 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.605808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.605892 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildworkdir\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.605925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.605971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606049 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc 
kubenswrapper[4805]: I1203 00:33:22.606092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606155 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhjsm\" (UniqueName: \"kubernetes.io/projected/a5531fe2-10b7-40a9-8fb7-ec6299352239-kube-api-access-rhjsm\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606242 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.606266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707126 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-proxy-ca-bundles\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707149 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhjsm\" (UniqueName: \"kubernetes.io/projected/a5531fe2-10b7-40a9-8fb7-ec6299352239-kube-api-access-rhjsm\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707171 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707253 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" 
Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707291 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707330 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707382 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707406 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-blob-cache\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.707446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708249 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708337 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708415 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708416 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 
00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.708588 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.709332 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.714360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.714402 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.719562 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.729996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhjsm\" (UniqueName: \"kubernetes.io/projected/a5531fe2-10b7-40a9-8fb7-ec6299352239-kube-api-access-rhjsm\") pod \"service-telemetry-framework-index-1-build\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:22 crc kubenswrapper[4805]: I1203 00:33:22.865776 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:33:23 crc kubenswrapper[4805]: I1203 00:33:23.288228 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 03 00:33:23 crc kubenswrapper[4805]: I1203 00:33:23.646465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a5531fe2-10b7-40a9-8fb7-ec6299352239","Type":"ContainerStarted","Data":"d34d3e4c8aebc8acda7a7eeb10f5399a204dc5e324d0fa48f632e78a0c1afd1b"} Dec 03 00:33:23 crc kubenswrapper[4805]: I1203 00:33:23.646866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a5531fe2-10b7-40a9-8fb7-ec6299352239","Type":"ContainerStarted","Data":"4929c7890489b6c412c0bc7c44857ef391f07cd1fab2146022d2ccc53eed79b4"} Dec 03 00:33:24 crc kubenswrapper[4805]: I1203 00:33:24.654974 4805 generic.go:334] "Generic (PLEG): container finished" podID="a5531fe2-10b7-40a9-8fb7-ec6299352239" 
containerID="d34d3e4c8aebc8acda7a7eeb10f5399a204dc5e324d0fa48f632e78a0c1afd1b" exitCode=0 Dec 03 00:33:24 crc kubenswrapper[4805]: I1203 00:33:24.655058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a5531fe2-10b7-40a9-8fb7-ec6299352239","Type":"ContainerDied","Data":"d34d3e4c8aebc8acda7a7eeb10f5399a204dc5e324d0fa48f632e78a0c1afd1b"} Dec 03 00:33:25 crc kubenswrapper[4805]: I1203 00:33:25.666324 4805 generic.go:334] "Generic (PLEG): container finished" podID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerID="4ff5034a94c671e61b7f58d70ddd2c1e2cdc47ce327e91e9f3d460537797facf" exitCode=0 Dec 03 00:33:25 crc kubenswrapper[4805]: I1203 00:33:25.666417 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a5531fe2-10b7-40a9-8fb7-ec6299352239","Type":"ContainerDied","Data":"4ff5034a94c671e61b7f58d70ddd2c1e2cdc47ce327e91e9f3d460537797facf"} Dec 03 00:33:25 crc kubenswrapper[4805]: I1203 00:33:25.702129 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_a5531fe2-10b7-40a9-8fb7-ec6299352239/manage-dockerfile/0.log" Dec 03 00:33:26 crc kubenswrapper[4805]: I1203 00:33:26.676490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a5531fe2-10b7-40a9-8fb7-ec6299352239","Type":"ContainerStarted","Data":"930c7bbf7a69e84b130eaafab5143c411dc1001edd9e5ccfbc175beee352a76d"} Dec 03 00:33:26 crc kubenswrapper[4805]: I1203 00:33:26.711790 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=4.71175592 podStartE2EDuration="4.71175592s" podCreationTimestamp="2025-12-03 00:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:33:26.703409133 +0000 UTC m=+1630.552371789" watchObservedRunningTime="2025-12-03 00:33:26.71175592 +0000 UTC m=+1630.560718566" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.036855 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bh8lg"] Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.039405 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.055901 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh8lg"] Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.175636 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-utilities\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.175713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh2qp\" (UniqueName: \"kubernetes.io/projected/c9405450-b740-4c3c-9c78-3d3da3009036-kube-api-access-rh2qp\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.175816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-catalog-content\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" 
Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.277913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-utilities\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.278016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh2qp\" (UniqueName: \"kubernetes.io/projected/c9405450-b740-4c3c-9c78-3d3da3009036-kube-api-access-rh2qp\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.278079 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-catalog-content\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.278686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-utilities\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.278775 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-catalog-content\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: 
I1203 00:33:40.307633 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh2qp\" (UniqueName: \"kubernetes.io/projected/c9405450-b740-4c3c-9c78-3d3da3009036-kube-api-access-rh2qp\") pod \"community-operators-bh8lg\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.409402 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.724764 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh8lg"] Dec 03 00:33:40 crc kubenswrapper[4805]: I1203 00:33:40.834391 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh8lg" event={"ID":"c9405450-b740-4c3c-9c78-3d3da3009036","Type":"ContainerStarted","Data":"81ac973b3f03da5c25709f75c10ebb6ab3f4d795ae3fe830c89e0262e3986231"} Dec 03 00:33:42 crc kubenswrapper[4805]: I1203 00:33:42.849489 4805 generic.go:334] "Generic (PLEG): container finished" podID="c9405450-b740-4c3c-9c78-3d3da3009036" containerID="26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548" exitCode=0 Dec 03 00:33:42 crc kubenswrapper[4805]: I1203 00:33:42.849566 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh8lg" event={"ID":"c9405450-b740-4c3c-9c78-3d3da3009036","Type":"ContainerDied","Data":"26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548"} Dec 03 00:33:44 crc kubenswrapper[4805]: I1203 00:33:44.866131 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh8lg" event={"ID":"c9405450-b740-4c3c-9c78-3d3da3009036","Type":"ContainerStarted","Data":"bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35"} Dec 03 00:33:45 crc kubenswrapper[4805]: I1203 
00:33:45.876889 4805 generic.go:334] "Generic (PLEG): container finished" podID="c9405450-b740-4c3c-9c78-3d3da3009036" containerID="bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35" exitCode=0 Dec 03 00:33:45 crc kubenswrapper[4805]: I1203 00:33:45.877001 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh8lg" event={"ID":"c9405450-b740-4c3c-9c78-3d3da3009036","Type":"ContainerDied","Data":"bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35"} Dec 03 00:33:47 crc kubenswrapper[4805]: I1203 00:33:47.812180 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:33:47 crc kubenswrapper[4805]: I1203 00:33:47.812646 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:33:47 crc kubenswrapper[4805]: I1203 00:33:47.894565 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh8lg" event={"ID":"c9405450-b740-4c3c-9c78-3d3da3009036","Type":"ContainerStarted","Data":"c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7"} Dec 03 00:33:47 crc kubenswrapper[4805]: I1203 00:33:47.933610 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bh8lg" podStartSLOduration=5.028979247 podStartE2EDuration="7.933572125s" podCreationTimestamp="2025-12-03 00:33:40 +0000 UTC" firstStartedPulling="2025-12-03 00:33:43.859968472 +0000 UTC m=+1647.708931088" 
lastFinishedPulling="2025-12-03 00:33:46.76456136 +0000 UTC m=+1650.613523966" observedRunningTime="2025-12-03 00:33:47.924550661 +0000 UTC m=+1651.773513277" watchObservedRunningTime="2025-12-03 00:33:47.933572125 +0000 UTC m=+1651.782534771" Dec 03 00:33:50 crc kubenswrapper[4805]: I1203 00:33:50.410715 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:50 crc kubenswrapper[4805]: I1203 00:33:50.411120 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:33:50 crc kubenswrapper[4805]: I1203 00:33:50.461554 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:34:00 crc kubenswrapper[4805]: I1203 00:34:00.475300 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:34:00 crc kubenswrapper[4805]: I1203 00:34:00.526554 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh8lg"] Dec 03 00:34:00 crc kubenswrapper[4805]: I1203 00:34:00.996237 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bh8lg" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" containerName="registry-server" containerID="cri-o://c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7" gracePeriod=2 Dec 03 00:34:03 crc kubenswrapper[4805]: I1203 00:34:03.812075 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:34:03 crc kubenswrapper[4805]: I1203 00:34:03.913413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh2qp\" (UniqueName: \"kubernetes.io/projected/c9405450-b740-4c3c-9c78-3d3da3009036-kube-api-access-rh2qp\") pod \"c9405450-b740-4c3c-9c78-3d3da3009036\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " Dec 03 00:34:03 crc kubenswrapper[4805]: I1203 00:34:03.913537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-utilities\") pod \"c9405450-b740-4c3c-9c78-3d3da3009036\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " Dec 03 00:34:03 crc kubenswrapper[4805]: I1203 00:34:03.913578 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-catalog-content\") pod \"c9405450-b740-4c3c-9c78-3d3da3009036\" (UID: \"c9405450-b740-4c3c-9c78-3d3da3009036\") " Dec 03 00:34:03 crc kubenswrapper[4805]: I1203 00:34:03.914569 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-utilities" (OuterVolumeSpecName: "utilities") pod "c9405450-b740-4c3c-9c78-3d3da3009036" (UID: "c9405450-b740-4c3c-9c78-3d3da3009036"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:03 crc kubenswrapper[4805]: I1203 00:34:03.920707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9405450-b740-4c3c-9c78-3d3da3009036-kube-api-access-rh2qp" (OuterVolumeSpecName: "kube-api-access-rh2qp") pod "c9405450-b740-4c3c-9c78-3d3da3009036" (UID: "c9405450-b740-4c3c-9c78-3d3da3009036"). InnerVolumeSpecName "kube-api-access-rh2qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:34:03 crc kubenswrapper[4805]: I1203 00:34:03.964269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9405450-b740-4c3c-9c78-3d3da3009036" (UID: "c9405450-b740-4c3c-9c78-3d3da3009036"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.018641 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.018692 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9405450-b740-4c3c-9c78-3d3da3009036-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.018711 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh2qp\" (UniqueName: \"kubernetes.io/projected/c9405450-b740-4c3c-9c78-3d3da3009036-kube-api-access-rh2qp\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.022685 4805 generic.go:334] "Generic (PLEG): container finished" podID="c9405450-b740-4c3c-9c78-3d3da3009036" containerID="c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7" exitCode=0 Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.022734 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh8lg" event={"ID":"c9405450-b740-4c3c-9c78-3d3da3009036","Type":"ContainerDied","Data":"c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7"} Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.022767 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bh8lg" event={"ID":"c9405450-b740-4c3c-9c78-3d3da3009036","Type":"ContainerDied","Data":"81ac973b3f03da5c25709f75c10ebb6ab3f4d795ae3fe830c89e0262e3986231"} Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.022790 4805 scope.go:117] "RemoveContainer" containerID="c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.022931 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh8lg" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.043720 4805 scope.go:117] "RemoveContainer" containerID="bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.057661 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh8lg"] Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.064513 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bh8lg"] Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.091846 4805 scope.go:117] "RemoveContainer" containerID="26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.116045 4805 scope.go:117] "RemoveContainer" containerID="c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7" Dec 03 00:34:04 crc kubenswrapper[4805]: E1203 00:34:04.117485 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7\": container with ID starting with c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7 not found: ID does not exist" containerID="c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 
00:34:04.117545 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7"} err="failed to get container status \"c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7\": rpc error: code = NotFound desc = could not find container \"c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7\": container with ID starting with c7d0c982af9305aa25f416186fe2591a0c70fc6c0b6f2f3c7d6954d9dfd09cf7 not found: ID does not exist" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.117586 4805 scope.go:117] "RemoveContainer" containerID="bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35" Dec 03 00:34:04 crc kubenswrapper[4805]: E1203 00:34:04.118292 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35\": container with ID starting with bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35 not found: ID does not exist" containerID="bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.118349 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35"} err="failed to get container status \"bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35\": rpc error: code = NotFound desc = could not find container \"bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35\": container with ID starting with bfe1949c7645b5e3900134aeb20d816041ee6c8521193dbcf4e67196a0749d35 not found: ID does not exist" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.118386 4805 scope.go:117] "RemoveContainer" containerID="26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548" Dec 03 00:34:04 crc 
kubenswrapper[4805]: E1203 00:34:04.118806 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548\": container with ID starting with 26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548 not found: ID does not exist" containerID="26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.118843 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548"} err="failed to get container status \"26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548\": rpc error: code = NotFound desc = could not find container \"26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548\": container with ID starting with 26d9dd2fc29214548b0418ab279e91a44ef373c7a6c59f61fa7fa62e31b2c548 not found: ID does not exist" Dec 03 00:34:04 crc kubenswrapper[4805]: I1203 00:34:04.450170 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" path="/var/lib/kubelet/pods/c9405450-b740-4c3c-9c78-3d3da3009036/volumes" Dec 03 00:34:07 crc kubenswrapper[4805]: I1203 00:34:07.054470 4805 generic.go:334] "Generic (PLEG): container finished" podID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerID="930c7bbf7a69e84b130eaafab5143c411dc1001edd9e5ccfbc175beee352a76d" exitCode=0 Dec 03 00:34:07 crc kubenswrapper[4805]: I1203 00:34:07.054760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a5531fe2-10b7-40a9-8fb7-ec6299352239","Type":"ContainerDied","Data":"930c7bbf7a69e84b130eaafab5143c411dc1001edd9e5ccfbc175beee352a76d"} Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.331473 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493510 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildcachedir\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493610 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-root\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493691 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-blob-cache\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493700 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493740 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildworkdir\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493835 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhjsm\" (UniqueName: \"kubernetes.io/projected/a5531fe2-10b7-40a9-8fb7-ec6299352239-kube-api-access-rhjsm\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493912 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-system-configs\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493952 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-run\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.493985 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-node-pullsecrets\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494024 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494069 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-ca-bundles\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494126 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-push\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494167 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-pull\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494442 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-proxy-ca-bundles\") pod \"a5531fe2-10b7-40a9-8fb7-ec6299352239\" (UID: \"a5531fe2-10b7-40a9-8fb7-ec6299352239\") " Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494535 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494965 4805 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494986 4805 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.494997 4805 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5531fe2-10b7-40a9-8fb7-ec6299352239-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.495236 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.495509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.495551 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.496212 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.500242 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5531fe2-10b7-40a9-8fb7-ec6299352239-kube-api-access-rhjsm" (OuterVolumeSpecName: "kube-api-access-rhjsm") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "kube-api-access-rhjsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.500315 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-pull" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-pull") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "builder-dockercfg-fwd7j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.500336 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.500378 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-push" (OuterVolumeSpecName: "builder-dockercfg-fwd7j-push") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "builder-dockercfg-fwd7j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596401 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhjsm\" (UniqueName: \"kubernetes.io/projected/a5531fe2-10b7-40a9-8fb7-ec6299352239-kube-api-access-rhjsm\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596428 4805 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596437 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596447 4805 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596457 4805 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596468 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-push\" (UniqueName: \"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-push\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596492 4805 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-fwd7j-pull\" (UniqueName: 
\"kubernetes.io/secret/a5531fe2-10b7-40a9-8fb7-ec6299352239-builder-dockercfg-fwd7j-pull\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.596504 4805 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.722531 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:08 crc kubenswrapper[4805]: I1203 00:34:08.799648 4805 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:09 crc kubenswrapper[4805]: I1203 00:34:09.068379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"a5531fe2-10b7-40a9-8fb7-ec6299352239","Type":"ContainerDied","Data":"4929c7890489b6c412c0bc7c44857ef391f07cd1fab2146022d2ccc53eed79b4"} Dec 03 00:34:09 crc kubenswrapper[4805]: I1203 00:34:09.069038 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4929c7890489b6c412c0bc7c44857ef391f07cd1fab2146022d2ccc53eed79b4" Dec 03 00:34:09 crc kubenswrapper[4805]: I1203 00:34:09.068445 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 03 00:34:09 crc kubenswrapper[4805]: I1203 00:34:09.472483 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a5531fe2-10b7-40a9-8fb7-ec6299352239" (UID: "a5531fe2-10b7-40a9-8fb7-ec6299352239"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:09 crc kubenswrapper[4805]: I1203 00:34:09.508438 4805 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5531fe2-10b7-40a9-8fb7-ec6299352239-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.168475 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rnpgw"] Dec 03 00:34:10 crc kubenswrapper[4805]: E1203 00:34:10.169008 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerName="manage-dockerfile" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169093 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerName="manage-dockerfile" Dec 03 00:34:10 crc kubenswrapper[4805]: E1203 00:34:10.169159 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" containerName="extract-utilities" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169246 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" containerName="extract-utilities" Dec 03 00:34:10 crc kubenswrapper[4805]: E1203 00:34:10.169360 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" 
containerName="extract-content" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169414 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" containerName="extract-content" Dec 03 00:34:10 crc kubenswrapper[4805]: E1203 00:34:10.169481 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerName="git-clone" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169539 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerName="git-clone" Dec 03 00:34:10 crc kubenswrapper[4805]: E1203 00:34:10.169599 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerName="docker-build" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169653 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerName="docker-build" Dec 03 00:34:10 crc kubenswrapper[4805]: E1203 00:34:10.169713 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" containerName="registry-server" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169771 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" containerName="registry-server" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169925 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5531fe2-10b7-40a9-8fb7-ec6299352239" containerName="docker-build" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.169997 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9405450-b740-4c3c-9c78-3d3da3009036" containerName="registry-server" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.170529 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rnpgw" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.178422 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-operators-dockercfg-tgvbd" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.179818 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rnpgw"] Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.317193 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9c9\" (UniqueName: \"kubernetes.io/projected/7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55-kube-api-access-jw9c9\") pod \"service-telemetry-framework-operators-rnpgw\" (UID: \"7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55\") " pod="service-telemetry/service-telemetry-framework-operators-rnpgw" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.418819 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9c9\" (UniqueName: \"kubernetes.io/projected/7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55-kube-api-access-jw9c9\") pod \"service-telemetry-framework-operators-rnpgw\" (UID: \"7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55\") " pod="service-telemetry/service-telemetry-framework-operators-rnpgw" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.439864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9c9\" (UniqueName: \"kubernetes.io/projected/7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55-kube-api-access-jw9c9\") pod \"service-telemetry-framework-operators-rnpgw\" (UID: \"7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55\") " pod="service-telemetry/service-telemetry-framework-operators-rnpgw" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.526572 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rnpgw" Dec 03 00:34:10 crc kubenswrapper[4805]: I1203 00:34:10.723454 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rnpgw"] Dec 03 00:34:11 crc kubenswrapper[4805]: I1203 00:34:11.083909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-rnpgw" event={"ID":"7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55","Type":"ContainerStarted","Data":"6c0ca079d6657e37d3878f3f1d7ca9429864dfb347db0bd6ed9d2f1a7f9b2275"} Dec 03 00:34:15 crc kubenswrapper[4805]: I1203 00:34:15.959078 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rnpgw"] Dec 03 00:34:16 crc kubenswrapper[4805]: I1203 00:34:16.769662 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-dd72f"] Dec 03 00:34:16 crc kubenswrapper[4805]: I1203 00:34:16.770983 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:16 crc kubenswrapper[4805]: I1203 00:34:16.784192 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-dd72f"] Dec 03 00:34:16 crc kubenswrapper[4805]: I1203 00:34:16.840601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twj5g\" (UniqueName: \"kubernetes.io/projected/235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc-kube-api-access-twj5g\") pod \"service-telemetry-framework-operators-dd72f\" (UID: \"235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc\") " pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:16 crc kubenswrapper[4805]: I1203 00:34:16.941800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twj5g\" (UniqueName: \"kubernetes.io/projected/235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc-kube-api-access-twj5g\") pod \"service-telemetry-framework-operators-dd72f\" (UID: \"235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc\") " pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:16 crc kubenswrapper[4805]: I1203 00:34:16.965979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twj5g\" (UniqueName: \"kubernetes.io/projected/235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc-kube-api-access-twj5g\") pod \"service-telemetry-framework-operators-dd72f\" (UID: \"235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc\") " pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:17 crc kubenswrapper[4805]: I1203 00:34:17.136530 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:17 crc kubenswrapper[4805]: I1203 00:34:17.811073 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:34:17 crc kubenswrapper[4805]: I1203 00:34:17.811164 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:34:17 crc kubenswrapper[4805]: I1203 00:34:17.811290 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:34:17 crc kubenswrapper[4805]: I1203 00:34:17.812090 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:34:17 crc kubenswrapper[4805]: I1203 00:34:17.812173 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" gracePeriod=600 Dec 03 00:34:18 crc kubenswrapper[4805]: I1203 00:34:18.135490 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" exitCode=0 Dec 03 00:34:18 crc kubenswrapper[4805]: I1203 00:34:18.135554 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18"} Dec 03 00:34:18 crc kubenswrapper[4805]: I1203 00:34:18.135602 4805 scope.go:117] "RemoveContainer" containerID="c5bd9b5c258ecf356c62660c4f09c420d8d82addbe1860d108970f42116ef5be" Dec 03 00:34:23 crc kubenswrapper[4805]: E1203 00:34:23.679796 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:34:24 crc kubenswrapper[4805]: I1203 00:34:24.181087 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:34:24 crc kubenswrapper[4805]: E1203 00:34:24.181614 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:34:26 crc kubenswrapper[4805]: I1203 00:34:26.913096 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/service-telemetry-framework-operators-dd72f"] Dec 03 00:34:27 crc kubenswrapper[4805]: E1203 00:34:27.079768 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 03 00:34:27 crc kubenswrapper[4805]: E1203 00:34:27.079966 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jw9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-framework-operators-rnpgw_service-telemetry(7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 00:34:27 crc kubenswrapper[4805]: E1203 00:34:27.081329 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-framework-operators-rnpgw" podUID="7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55" Dec 03 00:34:27 crc kubenswrapper[4805]: I1203 00:34:27.202630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-dd72f" event={"ID":"235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc","Type":"ContainerStarted","Data":"5a8c86c8f70f0ececfb13806a53dbe37f138127ae3b151b041a45e94e5b7c6f3"} Dec 03 00:34:27 
crc kubenswrapper[4805]: I1203 00:34:27.429830 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rnpgw" Dec 03 00:34:27 crc kubenswrapper[4805]: I1203 00:34:27.515975 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw9c9\" (UniqueName: \"kubernetes.io/projected/7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55-kube-api-access-jw9c9\") pod \"7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55\" (UID: \"7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55\") " Dec 03 00:34:27 crc kubenswrapper[4805]: I1203 00:34:27.522938 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55-kube-api-access-jw9c9" (OuterVolumeSpecName: "kube-api-access-jw9c9") pod "7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55" (UID: "7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55"). InnerVolumeSpecName "kube-api-access-jw9c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:34:27 crc kubenswrapper[4805]: I1203 00:34:27.617771 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw9c9\" (UniqueName: \"kubernetes.io/projected/7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55-kube-api-access-jw9c9\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:28 crc kubenswrapper[4805]: I1203 00:34:28.209377 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-rnpgw" Dec 03 00:34:28 crc kubenswrapper[4805]: I1203 00:34:28.209363 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-rnpgw" event={"ID":"7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55","Type":"ContainerDied","Data":"6c0ca079d6657e37d3878f3f1d7ca9429864dfb347db0bd6ed9d2f1a7f9b2275"} Dec 03 00:34:28 crc kubenswrapper[4805]: I1203 00:34:28.210841 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-dd72f" event={"ID":"235f8cfd-d642-43cd-bcb4-ddbc91b2a4fc","Type":"ContainerStarted","Data":"41a4cd34751da6f52c6ae4d347f420001eb932cebf4ff49f01a25b9f37d04023"} Dec 03 00:34:28 crc kubenswrapper[4805]: I1203 00:34:28.240279 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-dd72f" podStartSLOduration=12.13375236 podStartE2EDuration="12.240246011s" podCreationTimestamp="2025-12-03 00:34:16 +0000 UTC" firstStartedPulling="2025-12-03 00:34:27.018243812 +0000 UTC m=+1690.867206418" lastFinishedPulling="2025-12-03 00:34:27.124737463 +0000 UTC m=+1690.973700069" observedRunningTime="2025-12-03 00:34:28.231533255 +0000 UTC m=+1692.080495911" watchObservedRunningTime="2025-12-03 00:34:28.240246011 +0000 UTC m=+1692.089208667" Dec 03 00:34:28 crc kubenswrapper[4805]: I1203 00:34:28.274692 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rnpgw"] Dec 03 00:34:28 crc kubenswrapper[4805]: I1203 00:34:28.281769 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-rnpgw"] Dec 03 00:34:28 crc kubenswrapper[4805]: I1203 00:34:28.430336 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55" 
path="/var/lib/kubelet/pods/7939bf48-d1cc-4bc8-a1a1-f1cfd58f4d55/volumes" Dec 03 00:34:36 crc kubenswrapper[4805]: I1203 00:34:36.429731 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:34:36 crc kubenswrapper[4805]: E1203 00:34:36.430608 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:34:37 crc kubenswrapper[4805]: I1203 00:34:37.138144 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:37 crc kubenswrapper[4805]: I1203 00:34:37.138321 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:37 crc kubenswrapper[4805]: I1203 00:34:37.181973 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:37 crc kubenswrapper[4805]: I1203 00:34:37.296765 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-dd72f" Dec 03 00:34:47 crc kubenswrapper[4805]: I1203 00:34:47.974324 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x"] Dec 03 00:34:47 crc kubenswrapper[4805]: I1203 00:34:47.976871 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:47 crc kubenswrapper[4805]: I1203 00:34:47.982561 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x"] Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.070599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.070654 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.070825 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhk9g\" (UniqueName: \"kubernetes.io/projected/7ab62d29-36cf-45a1-8a41-2f0f35aed445-kube-api-access-qhk9g\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.172402 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhk9g\" (UniqueName: 
\"kubernetes.io/projected/7ab62d29-36cf-45a1-8a41-2f0f35aed445-kube-api-access-qhk9g\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.172482 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.172519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.173096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.173190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " 
pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.192339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhk9g\" (UniqueName: \"kubernetes.io/projected/7ab62d29-36cf-45a1-8a41-2f0f35aed445-kube-api-access-qhk9g\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.295637 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.500175 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x"] Dec 03 00:34:48 crc kubenswrapper[4805]: W1203 00:34:48.506030 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ab62d29_36cf_45a1_8a41_2f0f35aed445.slice/crio-9dd1ff4f9183b4016cd7865c0fc91f1b43831df0e28e55daa365501c83f65e9c WatchSource:0}: Error finding container 9dd1ff4f9183b4016cd7865c0fc91f1b43831df0e28e55daa365501c83f65e9c: Status 404 returned error can't find the container with id 9dd1ff4f9183b4016cd7865c0fc91f1b43831df0e28e55daa365501c83f65e9c Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.766348 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd"] Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.767518 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.779963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.780002 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.780039 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4sjl\" (UniqueName: \"kubernetes.io/projected/882395a5-2513-4b04-b9f1-a7f6e301b3bb-kube-api-access-v4sjl\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.783979 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd"] Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.881172 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-bundle\") 
pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.881228 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.881260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4sjl\" (UniqueName: \"kubernetes.io/projected/882395a5-2513-4b04-b9f1-a7f6e301b3bb-kube-api-access-v4sjl\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.881779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.881938 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " 
pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:48 crc kubenswrapper[4805]: I1203 00:34:48.901367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4sjl\" (UniqueName: \"kubernetes.io/projected/882395a5-2513-4b04-b9f1-a7f6e301b3bb-kube-api-access-v4sjl\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:49 crc kubenswrapper[4805]: I1203 00:34:49.087875 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:49 crc kubenswrapper[4805]: I1203 00:34:49.377105 4805 generic.go:334] "Generic (PLEG): container finished" podID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerID="5938f7a30dc3c5fd9a67ca89f0cfbfd282bcdcad8ddae85f658e3a8f0107ddb8" exitCode=0 Dec 03 00:34:49 crc kubenswrapper[4805]: I1203 00:34:49.377154 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" event={"ID":"7ab62d29-36cf-45a1-8a41-2f0f35aed445","Type":"ContainerDied","Data":"5938f7a30dc3c5fd9a67ca89f0cfbfd282bcdcad8ddae85f658e3a8f0107ddb8"} Dec 03 00:34:49 crc kubenswrapper[4805]: I1203 00:34:49.377183 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" event={"ID":"7ab62d29-36cf-45a1-8a41-2f0f35aed445","Type":"ContainerStarted","Data":"9dd1ff4f9183b4016cd7865c0fc91f1b43831df0e28e55daa365501c83f65e9c"} Dec 03 00:34:49 crc kubenswrapper[4805]: I1203 00:34:49.556668 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd"] Dec 03 00:34:49 crc 
kubenswrapper[4805]: W1203 00:34:49.645873 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod882395a5_2513_4b04_b9f1_a7f6e301b3bb.slice/crio-6b78f207583e84fcad745ccc5aeb195c253a30d30ebd9247feb576b1f8e12f34 WatchSource:0}: Error finding container 6b78f207583e84fcad745ccc5aeb195c253a30d30ebd9247feb576b1f8e12f34: Status 404 returned error can't find the container with id 6b78f207583e84fcad745ccc5aeb195c253a30d30ebd9247feb576b1f8e12f34 Dec 03 00:34:50 crc kubenswrapper[4805]: I1203 00:34:50.388610 4805 generic.go:334] "Generic (PLEG): container finished" podID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerID="a728cacf4443a52008294ac6188ef29bc3546fe3e27516eaf07f2e5618554499" exitCode=0 Dec 03 00:34:50 crc kubenswrapper[4805]: I1203 00:34:50.388700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" event={"ID":"7ab62d29-36cf-45a1-8a41-2f0f35aed445","Type":"ContainerDied","Data":"a728cacf4443a52008294ac6188ef29bc3546fe3e27516eaf07f2e5618554499"} Dec 03 00:34:50 crc kubenswrapper[4805]: I1203 00:34:50.390490 4805 generic.go:334] "Generic (PLEG): container finished" podID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerID="b4ad286b3998ecf15c1bb40daf95d800d3bf84ce4c69a0d006b3252fbc361611" exitCode=0 Dec 03 00:34:50 crc kubenswrapper[4805]: I1203 00:34:50.390892 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" event={"ID":"882395a5-2513-4b04-b9f1-a7f6e301b3bb","Type":"ContainerDied","Data":"b4ad286b3998ecf15c1bb40daf95d800d3bf84ce4c69a0d006b3252fbc361611"} Dec 03 00:34:50 crc kubenswrapper[4805]: I1203 00:34:50.390916 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" 
event={"ID":"882395a5-2513-4b04-b9f1-a7f6e301b3bb","Type":"ContainerStarted","Data":"6b78f207583e84fcad745ccc5aeb195c253a30d30ebd9247feb576b1f8e12f34"} Dec 03 00:34:51 crc kubenswrapper[4805]: I1203 00:34:51.404101 4805 generic.go:334] "Generic (PLEG): container finished" podID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerID="20c9931a8fe7ad38d8585f36c3c59e6203b96c1df3826195a218f8a63f1b3815" exitCode=0 Dec 03 00:34:51 crc kubenswrapper[4805]: I1203 00:34:51.404171 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" event={"ID":"7ab62d29-36cf-45a1-8a41-2f0f35aed445","Type":"ContainerDied","Data":"20c9931a8fe7ad38d8585f36c3c59e6203b96c1df3826195a218f8a63f1b3815"} Dec 03 00:34:51 crc kubenswrapper[4805]: I1203 00:34:51.407393 4805 generic.go:334] "Generic (PLEG): container finished" podID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerID="a631461fe5f3cbfa5fa8b92de3af557c5820d31beef4680525d83cb037b4e0ae" exitCode=0 Dec 03 00:34:51 crc kubenswrapper[4805]: I1203 00:34:51.407443 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" event={"ID":"882395a5-2513-4b04-b9f1-a7f6e301b3bb","Type":"ContainerDied","Data":"a631461fe5f3cbfa5fa8b92de3af557c5820d31beef4680525d83cb037b4e0ae"} Dec 03 00:34:51 crc kubenswrapper[4805]: I1203 00:34:51.427373 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:34:51 crc kubenswrapper[4805]: E1203 00:34:51.427640 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" 
podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.649307 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.849935 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-bundle\") pod \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.849990 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhk9g\" (UniqueName: \"kubernetes.io/projected/7ab62d29-36cf-45a1-8a41-2f0f35aed445-kube-api-access-qhk9g\") pod \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.850113 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-util\") pod \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\" (UID: \"7ab62d29-36cf-45a1-8a41-2f0f35aed445\") " Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.850701 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-bundle" (OuterVolumeSpecName: "bundle") pod "7ab62d29-36cf-45a1-8a41-2f0f35aed445" (UID: "7ab62d29-36cf-45a1-8a41-2f0f35aed445"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.861795 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab62d29-36cf-45a1-8a41-2f0f35aed445-kube-api-access-qhk9g" (OuterVolumeSpecName: "kube-api-access-qhk9g") pod "7ab62d29-36cf-45a1-8a41-2f0f35aed445" (UID: "7ab62d29-36cf-45a1-8a41-2f0f35aed445"). InnerVolumeSpecName "kube-api-access-qhk9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.866503 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-util" (OuterVolumeSpecName: "util") pod "7ab62d29-36cf-45a1-8a41-2f0f35aed445" (UID: "7ab62d29-36cf-45a1-8a41-2f0f35aed445"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.952053 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.952099 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab62d29-36cf-45a1-8a41-2f0f35aed445-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:52 crc kubenswrapper[4805]: I1203 00:34:52.952111 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhk9g\" (UniqueName: \"kubernetes.io/projected/7ab62d29-36cf-45a1-8a41-2f0f35aed445-kube-api-access-qhk9g\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:53 crc kubenswrapper[4805]: I1203 00:34:53.427887 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" Dec 03 00:34:53 crc kubenswrapper[4805]: I1203 00:34:53.427896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65adzc4x" event={"ID":"7ab62d29-36cf-45a1-8a41-2f0f35aed445","Type":"ContainerDied","Data":"9dd1ff4f9183b4016cd7865c0fc91f1b43831df0e28e55daa365501c83f65e9c"} Dec 03 00:34:53 crc kubenswrapper[4805]: I1203 00:34:53.427939 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dd1ff4f9183b4016cd7865c0fc91f1b43831df0e28e55daa365501c83f65e9c" Dec 03 00:34:53 crc kubenswrapper[4805]: I1203 00:34:53.433588 4805 generic.go:334] "Generic (PLEG): container finished" podID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerID="f96912b9e5498ea85217b02fdffa705fe920eb3490e45130bf3600d84dad861d" exitCode=0 Dec 03 00:34:53 crc kubenswrapper[4805]: I1203 00:34:53.433721 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" event={"ID":"882395a5-2513-4b04-b9f1-a7f6e301b3bb","Type":"ContainerDied","Data":"f96912b9e5498ea85217b02fdffa705fe920eb3490e45130bf3600d84dad861d"} Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.749917 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.886180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-bundle\") pod \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.886411 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4sjl\" (UniqueName: \"kubernetes.io/projected/882395a5-2513-4b04-b9f1-a7f6e301b3bb-kube-api-access-v4sjl\") pod \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.886470 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-util\") pod \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\" (UID: \"882395a5-2513-4b04-b9f1-a7f6e301b3bb\") " Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.887024 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-bundle" (OuterVolumeSpecName: "bundle") pod "882395a5-2513-4b04-b9f1-a7f6e301b3bb" (UID: "882395a5-2513-4b04-b9f1-a7f6e301b3bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.897698 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882395a5-2513-4b04-b9f1-a7f6e301b3bb-kube-api-access-v4sjl" (OuterVolumeSpecName: "kube-api-access-v4sjl") pod "882395a5-2513-4b04-b9f1-a7f6e301b3bb" (UID: "882395a5-2513-4b04-b9f1-a7f6e301b3bb"). InnerVolumeSpecName "kube-api-access-v4sjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.906184 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-util" (OuterVolumeSpecName: "util") pod "882395a5-2513-4b04-b9f1-a7f6e301b3bb" (UID: "882395a5-2513-4b04-b9f1-a7f6e301b3bb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.987896 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4sjl\" (UniqueName: \"kubernetes.io/projected/882395a5-2513-4b04-b9f1-a7f6e301b3bb-kube-api-access-v4sjl\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.987940 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-util\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:54 crc kubenswrapper[4805]: I1203 00:34:54.987952 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/882395a5-2513-4b04-b9f1-a7f6e301b3bb-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 00:34:55 crc kubenswrapper[4805]: I1203 00:34:55.452462 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" event={"ID":"882395a5-2513-4b04-b9f1-a7f6e301b3bb","Type":"ContainerDied","Data":"6b78f207583e84fcad745ccc5aeb195c253a30d30ebd9247feb576b1f8e12f34"} Dec 03 00:34:55 crc kubenswrapper[4805]: I1203 00:34:55.452911 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b78f207583e84fcad745ccc5aeb195c253a30d30ebd9247feb576b1f8e12f34" Dec 03 00:34:55 crc kubenswrapper[4805]: I1203 00:34:55.452570 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vxlnd" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.386029 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2"] Dec 03 00:35:00 crc kubenswrapper[4805]: E1203 00:35:00.386705 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerName="util" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.386822 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerName="util" Dec 03 00:35:00 crc kubenswrapper[4805]: E1203 00:35:00.386834 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerName="util" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.386842 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerName="util" Dec 03 00:35:00 crc kubenswrapper[4805]: E1203 00:35:00.386854 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerName="extract" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.386862 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerName="extract" Dec 03 00:35:00 crc kubenswrapper[4805]: E1203 00:35:00.386878 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerName="pull" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.386887 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerName="pull" Dec 03 00:35:00 crc kubenswrapper[4805]: E1203 00:35:00.386903 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerName="extract" 
Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.386913 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerName="extract" Dec 03 00:35:00 crc kubenswrapper[4805]: E1203 00:35:00.386929 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerName="pull" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.386936 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerName="pull" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.387087 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab62d29-36cf-45a1-8a41-2f0f35aed445" containerName="extract" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.387102 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="882395a5-2513-4b04-b9f1-a7f6e301b3bb" containerName="extract" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.387804 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.392302 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-xd7hc" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.406608 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2"] Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.475094 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqpx\" (UniqueName: \"kubernetes.io/projected/cc3ecee9-27a5-431e-8a35-9d62915b2df7-kube-api-access-fjqpx\") pod \"service-telemetry-operator-6c67bcb598-zj4p2\" (UID: \"cc3ecee9-27a5-431e-8a35-9d62915b2df7\") " pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.475269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/cc3ecee9-27a5-431e-8a35-9d62915b2df7-runner\") pod \"service-telemetry-operator-6c67bcb598-zj4p2\" (UID: \"cc3ecee9-27a5-431e-8a35-9d62915b2df7\") " pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.576224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqpx\" (UniqueName: \"kubernetes.io/projected/cc3ecee9-27a5-431e-8a35-9d62915b2df7-kube-api-access-fjqpx\") pod \"service-telemetry-operator-6c67bcb598-zj4p2\" (UID: \"cc3ecee9-27a5-431e-8a35-9d62915b2df7\") " pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.576340 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/cc3ecee9-27a5-431e-8a35-9d62915b2df7-runner\") pod \"service-telemetry-operator-6c67bcb598-zj4p2\" (UID: \"cc3ecee9-27a5-431e-8a35-9d62915b2df7\") " pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.576912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/cc3ecee9-27a5-431e-8a35-9d62915b2df7-runner\") pod \"service-telemetry-operator-6c67bcb598-zj4p2\" (UID: \"cc3ecee9-27a5-431e-8a35-9d62915b2df7\") " pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.604000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqpx\" (UniqueName: \"kubernetes.io/projected/cc3ecee9-27a5-431e-8a35-9d62915b2df7-kube-api-access-fjqpx\") pod \"service-telemetry-operator-6c67bcb598-zj4p2\" (UID: \"cc3ecee9-27a5-431e-8a35-9d62915b2df7\") " pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.702568 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.928599 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2"] Dec 03 00:35:00 crc kubenswrapper[4805]: I1203 00:35:00.938897 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:35:01 crc kubenswrapper[4805]: I1203 00:35:01.494051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" event={"ID":"cc3ecee9-27a5-431e-8a35-9d62915b2df7","Type":"ContainerStarted","Data":"778c1e5f3683f08c913d24a5c0ee00c9859576225ce7368f7700a9dc79281549"} Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.204902 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-699b8db96d-czl52"] Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.206293 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.211338 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-nl2n2" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.264315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-699b8db96d-czl52"] Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.332047 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/edc06863-bc4d-47f8-a0ba-516d554d4343-runner\") pod \"smart-gateway-operator-699b8db96d-czl52\" (UID: \"edc06863-bc4d-47f8-a0ba-516d554d4343\") " pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.332132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsgbl\" (UniqueName: \"kubernetes.io/projected/edc06863-bc4d-47f8-a0ba-516d554d4343-kube-api-access-qsgbl\") pod \"smart-gateway-operator-699b8db96d-czl52\" (UID: \"edc06863-bc4d-47f8-a0ba-516d554d4343\") " pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.434284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsgbl\" (UniqueName: \"kubernetes.io/projected/edc06863-bc4d-47f8-a0ba-516d554d4343-kube-api-access-qsgbl\") pod \"smart-gateway-operator-699b8db96d-czl52\" (UID: \"edc06863-bc4d-47f8-a0ba-516d554d4343\") " pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.434392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/edc06863-bc4d-47f8-a0ba-516d554d4343-runner\") pod \"smart-gateway-operator-699b8db96d-czl52\" (UID: \"edc06863-bc4d-47f8-a0ba-516d554d4343\") " pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.435018 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/edc06863-bc4d-47f8-a0ba-516d554d4343-runner\") pod \"smart-gateway-operator-699b8db96d-czl52\" (UID: \"edc06863-bc4d-47f8-a0ba-516d554d4343\") " pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.486235 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsgbl\" (UniqueName: \"kubernetes.io/projected/edc06863-bc4d-47f8-a0ba-516d554d4343-kube-api-access-qsgbl\") pod \"smart-gateway-operator-699b8db96d-czl52\" (UID: \"edc06863-bc4d-47f8-a0ba-516d554d4343\") " pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:03 crc kubenswrapper[4805]: I1203 00:35:03.580307 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" Dec 03 00:35:04 crc kubenswrapper[4805]: I1203 00:35:04.117711 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-699b8db96d-czl52"] Dec 03 00:35:04 crc kubenswrapper[4805]: I1203 00:35:04.427940 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:35:04 crc kubenswrapper[4805]: E1203 00:35:04.428171 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:35:04 crc kubenswrapper[4805]: I1203 00:35:04.523282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" event={"ID":"edc06863-bc4d-47f8-a0ba-516d554d4343","Type":"ContainerStarted","Data":"a18b0c5a1d0d753d7f7d44a969fad9d4a77b1c18e3ff203342af90cdc1e110fe"} Dec 03 00:35:19 crc kubenswrapper[4805]: I1203 00:35:19.423810 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:35:19 crc kubenswrapper[4805]: E1203 00:35:19.424612 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:35:21 crc kubenswrapper[4805]: E1203 
00:35:21.360810 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/service-telemetry-operator:stable-1.5" Dec 03 00:35:21 crc kubenswrapper[4805]: E1203 00:35:21.361508 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/service-telemetry-operator:stable-1.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:service-telemetry-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_WEBHOOK_SNMP_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_IMAGE,Value:quay.io/prometheus/prometheus:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER_IMAGE,Value:quay.io/prometheus/alertmanager:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:service-telemetry-operator.v1.5.1764721933,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjqpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.
io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-6c67bcb598-zj4p2_service-telemetry(cc3ecee9-27a5-431e-8a35-9d62915b2df7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 00:35:21 crc kubenswrapper[4805]: E1203 00:35:21.362761 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" podUID="cc3ecee9-27a5-431e-8a35-9d62915b2df7" Dec 03 00:35:21 crc kubenswrapper[4805]: I1203 00:35:21.667753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" event={"ID":"edc06863-bc4d-47f8-a0ba-516d554d4343","Type":"ContainerStarted","Data":"a3aad54cff363705bfb3e6fb88e2ef3fcc3bb238401e375c580507b621aad2e8"} Dec 03 00:35:21 crc kubenswrapper[4805]: E1203 00:35:21.669783 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/service-telemetry-operator:stable-1.5\\\"\"" pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" 
podUID="cc3ecee9-27a5-431e-8a35-9d62915b2df7" Dec 03 00:35:30 crc kubenswrapper[4805]: I1203 00:35:30.423112 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:35:30 crc kubenswrapper[4805]: E1203 00:35:30.424386 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:35:34 crc kubenswrapper[4805]: I1203 00:35:34.447975 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-699b8db96d-czl52" podStartSLOduration=14.128422126 podStartE2EDuration="31.447953425s" podCreationTimestamp="2025-12-03 00:35:03 +0000 UTC" firstStartedPulling="2025-12-03 00:35:04.112297673 +0000 UTC m=+1727.961260279" lastFinishedPulling="2025-12-03 00:35:21.431828962 +0000 UTC m=+1745.280791578" observedRunningTime="2025-12-03 00:35:21.716651735 +0000 UTC m=+1745.565614341" watchObservedRunningTime="2025-12-03 00:35:34.447953425 +0000 UTC m=+1758.296916031" Dec 03 00:35:40 crc kubenswrapper[4805]: I1203 00:35:40.841338 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" event={"ID":"cc3ecee9-27a5-431e-8a35-9d62915b2df7","Type":"ContainerStarted","Data":"248ea64155038b2b322b94854e3c2fcf34dcd647ad86031aa73b4c756394caaf"} Dec 03 00:35:40 crc kubenswrapper[4805]: I1203 00:35:40.861144 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-6c67bcb598-zj4p2" podStartSLOduration=1.5085601579999999 podStartE2EDuration="40.861115686s" 
podCreationTimestamp="2025-12-03 00:35:00 +0000 UTC" firstStartedPulling="2025-12-03 00:35:00.938614563 +0000 UTC m=+1724.787577169" lastFinishedPulling="2025-12-03 00:35:40.291170051 +0000 UTC m=+1764.140132697" observedRunningTime="2025-12-03 00:35:40.856054332 +0000 UTC m=+1764.705016958" watchObservedRunningTime="2025-12-03 00:35:40.861115686 +0000 UTC m=+1764.710078302" Dec 03 00:35:41 crc kubenswrapper[4805]: I1203 00:35:41.423009 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:35:41 crc kubenswrapper[4805]: E1203 00:35:41.423496 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:35:55 crc kubenswrapper[4805]: I1203 00:35:55.423573 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:35:55 crc kubenswrapper[4805]: E1203 00:35:55.424684 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.839891 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zh2dz"] Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.841654 4805 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.845530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.846293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.846510 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-2vx6q" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.846671 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.846694 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.847987 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.848286 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.857423 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-users\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.857911 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.858165 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.858422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrmz\" (UniqueName: \"kubernetes.io/projected/91a28bb2-1342-478f-8a0f-77aec3120166-kube-api-access-wmrmz\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.858625 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-config\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.858776 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.858936 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.861962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zh2dz"] Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.960631 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrmz\" (UniqueName: \"kubernetes.io/projected/91a28bb2-1342-478f-8a0f-77aec3120166-kube-api-access-wmrmz\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.960707 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-config\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.960757 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.960807 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.961029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-users\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.961049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.961078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " 
pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.963029 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-config\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.968691 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.968801 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-users\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.972873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.973016 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.973272 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:04 crc kubenswrapper[4805]: I1203 00:36:04.987903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrmz\" (UniqueName: \"kubernetes.io/projected/91a28bb2-1342-478f-8a0f-77aec3120166-kube-api-access-wmrmz\") pod \"default-interconnect-68864d46cb-zh2dz\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:05 crc kubenswrapper[4805]: I1203 00:36:05.186039 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:36:05 crc kubenswrapper[4805]: I1203 00:36:05.428790 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zh2dz"] Dec 03 00:36:06 crc kubenswrapper[4805]: I1203 00:36:06.047178 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" event={"ID":"91a28bb2-1342-478f-8a0f-77aec3120166","Type":"ContainerStarted","Data":"5f413199aaccc811c3eb6db131d4a8f4c21c2a83605e8b1b262b8f08d8a6fcbb"} Dec 03 00:36:07 crc kubenswrapper[4805]: I1203 00:36:07.423899 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:36:07 crc kubenswrapper[4805]: E1203 00:36:07.424512 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:36:11 crc kubenswrapper[4805]: I1203 00:36:11.099231 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" event={"ID":"91a28bb2-1342-478f-8a0f-77aec3120166","Type":"ContainerStarted","Data":"6ef6789b1c0468f712ceea0872f89afadf281943a396893694b884476a65042c"} Dec 03 00:36:11 crc kubenswrapper[4805]: I1203 00:36:11.125849 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" podStartSLOduration=1.824480346 podStartE2EDuration="7.125824553s" podCreationTimestamp="2025-12-03 00:36:04 +0000 UTC" firstStartedPulling="2025-12-03 00:36:05.438084076 +0000 UTC 
m=+1789.287046682" lastFinishedPulling="2025-12-03 00:36:10.739428293 +0000 UTC m=+1794.588390889" observedRunningTime="2025-12-03 00:36:11.117764916 +0000 UTC m=+1794.966727572" watchObservedRunningTime="2025-12-03 00:36:11.125824553 +0000 UTC m=+1794.974787159" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.072894 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.076712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.079831 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.080146 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.080656 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.081787 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.081795 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-558fh" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.082001 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.082139 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.082171 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"prometheus-default-tls-assets-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.089384 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237302 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l8x6\" (UniqueName: \"kubernetes.io/projected/0deccfcc-828c-479b-a0dd-a42fa9146444-kube-api-access-5l8x6\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237365 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0deccfcc-828c-479b-a0dd-a42fa9146444-tls-assets\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237559 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-config\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237667 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0deccfcc-828c-479b-a0dd-a42fa9146444-config-out\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0deccfcc-828c-479b-a0dd-a42fa9146444-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237894 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0deccfcc-828c-479b-a0dd-a42fa9146444-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " 
pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.237916 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-web-config\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l8x6\" (UniqueName: \"kubernetes.io/projected/0deccfcc-828c-479b-a0dd-a42fa9146444-kube-api-access-5l8x6\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339739 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0deccfcc-828c-479b-a0dd-a42fa9146444-tls-assets\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339767 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-config\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0deccfcc-828c-479b-a0dd-a42fa9146444-config-out\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339810 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339871 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0deccfcc-828c-479b-a0dd-a42fa9146444-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339917 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0deccfcc-828c-479b-a0dd-a42fa9146444-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " 
pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.339940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-web-config\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.341109 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0deccfcc-828c-479b-a0dd-a42fa9146444-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: E1203 00:36:16.341223 4805 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 03 00:36:16 crc kubenswrapper[4805]: E1203 00:36:16.341358 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls podName:0deccfcc-828c-479b-a0dd-a42fa9146444 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:16.841322114 +0000 UTC m=+1800.690284760 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0deccfcc-828c-479b-a0dd-a42fa9146444") : secret "default-prometheus-proxy-tls" not found Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.342926 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0deccfcc-828c-479b-a0dd-a42fa9146444-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.346335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.346706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-web-config\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.346833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0deccfcc-828c-479b-a0dd-a42fa9146444-tls-assets\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.347688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/0deccfcc-828c-479b-a0dd-a42fa9146444-config-out\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.348761 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.348797 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb2b5eb6f84f9caa7d6e8c16777854af13c86bf37441b7846d35661eeb4e496f/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.374458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-config\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.377295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l8x6\" (UniqueName: \"kubernetes.io/projected/0deccfcc-828c-479b-a0dd-a42fa9146444-kube-api-access-5l8x6\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.388279 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-551e9c86-117f-454e-a4ce-f28ba3987d8c\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: I1203 00:36:16.848683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:16 crc kubenswrapper[4805]: E1203 00:36:16.849161 4805 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 03 00:36:16 crc kubenswrapper[4805]: E1203 00:36:16.849336 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls podName:0deccfcc-828c-479b-a0dd-a42fa9146444 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:17.849297918 +0000 UTC m=+1801.698260564 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0deccfcc-828c-479b-a0dd-a42fa9146444") : secret "default-prometheus-proxy-tls" not found Dec 03 00:36:17 crc kubenswrapper[4805]: I1203 00:36:17.866083 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:17 crc kubenswrapper[4805]: I1203 00:36:17.876076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0deccfcc-828c-479b-a0dd-a42fa9146444-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0deccfcc-828c-479b-a0dd-a42fa9146444\") " pod="service-telemetry/prometheus-default-0" Dec 03 00:36:17 crc kubenswrapper[4805]: I1203 00:36:17.908977 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-558fh" Dec 03 00:36:17 crc kubenswrapper[4805]: I1203 00:36:17.917298 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 03 00:36:18 crc kubenswrapper[4805]: I1203 00:36:18.189540 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 03 00:36:19 crc kubenswrapper[4805]: I1203 00:36:19.174178 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0deccfcc-828c-479b-a0dd-a42fa9146444","Type":"ContainerStarted","Data":"9cba0cd1b4cd1c6a3abf24b1c3ed451fe4c9024c0009f6e893e803a961813d14"} Dec 03 00:36:19 crc kubenswrapper[4805]: I1203 00:36:19.422874 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:36:19 crc kubenswrapper[4805]: E1203 00:36:19.423141 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:36:23 crc kubenswrapper[4805]: I1203 00:36:23.211597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0deccfcc-828c-479b-a0dd-a42fa9146444","Type":"ContainerStarted","Data":"6afeab5485b19c54ecb383ed6be3217afc09bb0fb5135efce625f4cf91aa0cd9"} Dec 03 00:36:26 crc kubenswrapper[4805]: I1203 00:36:26.742125 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-vgprn"] Dec 03 00:36:26 crc kubenswrapper[4805]: I1203 00:36:26.743351 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" Dec 03 00:36:26 crc kubenswrapper[4805]: I1203 00:36:26.764371 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-vgprn"] Dec 03 00:36:26 crc kubenswrapper[4805]: I1203 00:36:26.928439 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fqhk\" (UniqueName: \"kubernetes.io/projected/5cf29272-b2e7-4917-a402-d27cd7918749-kube-api-access-2fqhk\") pod \"default-snmp-webhook-6856cfb745-vgprn\" (UID: \"5cf29272-b2e7-4917-a402-d27cd7918749\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" Dec 03 00:36:27 crc kubenswrapper[4805]: I1203 00:36:27.030573 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fqhk\" (UniqueName: \"kubernetes.io/projected/5cf29272-b2e7-4917-a402-d27cd7918749-kube-api-access-2fqhk\") pod \"default-snmp-webhook-6856cfb745-vgprn\" (UID: \"5cf29272-b2e7-4917-a402-d27cd7918749\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" Dec 03 00:36:27 crc kubenswrapper[4805]: I1203 00:36:27.053298 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fqhk\" (UniqueName: \"kubernetes.io/projected/5cf29272-b2e7-4917-a402-d27cd7918749-kube-api-access-2fqhk\") pod \"default-snmp-webhook-6856cfb745-vgprn\" (UID: \"5cf29272-b2e7-4917-a402-d27cd7918749\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" Dec 03 00:36:27 crc kubenswrapper[4805]: I1203 00:36:27.081925 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" Dec 03 00:36:27 crc kubenswrapper[4805]: W1203 00:36:27.592481 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf29272_b2e7_4917_a402_d27cd7918749.slice/crio-d6f682f2437874e16fa8be6cb185a7d5309ad82a46ed3afc9108ed9ea0c30cc1 WatchSource:0}: Error finding container d6f682f2437874e16fa8be6cb185a7d5309ad82a46ed3afc9108ed9ea0c30cc1: Status 404 returned error can't find the container with id d6f682f2437874e16fa8be6cb185a7d5309ad82a46ed3afc9108ed9ea0c30cc1 Dec 03 00:36:27 crc kubenswrapper[4805]: I1203 00:36:27.595567 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-vgprn"] Dec 03 00:36:28 crc kubenswrapper[4805]: I1203 00:36:28.265028 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" event={"ID":"5cf29272-b2e7-4917-a402-d27cd7918749","Type":"ContainerStarted","Data":"d6f682f2437874e16fa8be6cb185a7d5309ad82a46ed3afc9108ed9ea0c30cc1"} Dec 03 00:36:29 crc kubenswrapper[4805]: I1203 00:36:29.079364 4805 scope.go:117] "RemoveContainer" containerID="3e0458a6db2680d9cca363e08aee4496ca10266ff48cb36a45d07be986344e9b" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.416405 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.418363 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.420903 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.421277 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.421748 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-wmsc8" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.421858 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.421992 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.422015 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.450581 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.488942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d5a53a-114f-4261-a53e-630fc66811fc-config-out\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489024 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-web-config\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489349 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-config-volume\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489718 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489803 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrk8c\" (UniqueName: \"kubernetes.io/projected/44d5a53a-114f-4261-a53e-630fc66811fc-kube-api-access-mrk8c\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.489959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d5a53a-114f-4261-a53e-630fc66811fc-tls-assets\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.591422 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.591525 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrk8c\" (UniqueName: \"kubernetes.io/projected/44d5a53a-114f-4261-a53e-630fc66811fc-kube-api-access-mrk8c\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: 
I1203 00:36:30.591574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.591641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d5a53a-114f-4261-a53e-630fc66811fc-tls-assets\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.591666 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d5a53a-114f-4261-a53e-630fc66811fc-config-out\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.591693 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-web-config\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.591717 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 
00:36:30.591756 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.591790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-config-volume\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: E1203 00:36:30.592422 4805 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:30 crc kubenswrapper[4805]: E1203 00:36:30.592546 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls podName:44d5a53a-114f-4261-a53e-630fc66811fc nodeName:}" failed. No retries permitted until 2025-12-03 00:36:31.092512334 +0000 UTC m=+1814.941474980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "44d5a53a-114f-4261-a53e-630fc66811fc") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.600794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-config-volume\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.600843 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.600890 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d5a53a-114f-4261-a53e-630fc66811fc-config-out\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.600949 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4de0d5a69a22d3fb6cfbd8b5c73a142e3a386a226074c512a059c0e4a300b377/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.606019 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d5a53a-114f-4261-a53e-630fc66811fc-tls-assets\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.608881 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-web-config\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.612164 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.618320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.622064 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrk8c\" (UniqueName: \"kubernetes.io/projected/44d5a53a-114f-4261-a53e-630fc66811fc-kube-api-access-mrk8c\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:30 crc kubenswrapper[4805]: I1203 00:36:30.672509 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6f76397-a076-4fbd-8efc-9b88c5bc6cd0\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:31 crc kubenswrapper[4805]: I1203 00:36:31.099876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:31 crc kubenswrapper[4805]: E1203 00:36:31.100132 4805 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:31 crc kubenswrapper[4805]: E1203 00:36:31.101019 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls podName:44d5a53a-114f-4261-a53e-630fc66811fc nodeName:}" failed. No retries permitted until 2025-12-03 00:36:32.100989011 +0000 UTC m=+1815.949951617 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "44d5a53a-114f-4261-a53e-630fc66811fc") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:32 crc kubenswrapper[4805]: I1203 00:36:32.120373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:32 crc kubenswrapper[4805]: E1203 00:36:32.120859 4805 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:32 crc kubenswrapper[4805]: E1203 00:36:32.121084 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls podName:44d5a53a-114f-4261-a53e-630fc66811fc nodeName:}" failed. No retries permitted until 2025-12-03 00:36:34.12103878 +0000 UTC m=+1817.970001456 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "44d5a53a-114f-4261-a53e-630fc66811fc") : secret "default-alertmanager-proxy-tls" not found Dec 03 00:36:32 crc kubenswrapper[4805]: I1203 00:36:32.300016 4805 generic.go:334] "Generic (PLEG): container finished" podID="0deccfcc-828c-479b-a0dd-a42fa9146444" containerID="6afeab5485b19c54ecb383ed6be3217afc09bb0fb5135efce625f4cf91aa0cd9" exitCode=0 Dec 03 00:36:32 crc kubenswrapper[4805]: I1203 00:36:32.300106 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0deccfcc-828c-479b-a0dd-a42fa9146444","Type":"ContainerDied","Data":"6afeab5485b19c54ecb383ed6be3217afc09bb0fb5135efce625f4cf91aa0cd9"} Dec 03 00:36:32 crc kubenswrapper[4805]: I1203 00:36:32.792898 4805 scope.go:117] "RemoveContainer" containerID="4b11d9737c68503daf09fbce25b25df23c893fd904cccbf4d8b5c5c081dd542d" Dec 03 00:36:34 crc kubenswrapper[4805]: I1203 00:36:34.163236 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:34 crc kubenswrapper[4805]: I1203 00:36:34.173326 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44d5a53a-114f-4261-a53e-630fc66811fc-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"44d5a53a-114f-4261-a53e-630fc66811fc\") " pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:34 crc kubenswrapper[4805]: I1203 00:36:34.354541 4805 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 03 00:36:34 crc kubenswrapper[4805]: I1203 00:36:34.423364 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:36:34 crc kubenswrapper[4805]: E1203 00:36:34.423629 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:36:35 crc kubenswrapper[4805]: I1203 00:36:35.749464 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 03 00:36:36 crc kubenswrapper[4805]: I1203 00:36:36.346415 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" event={"ID":"5cf29272-b2e7-4917-a402-d27cd7918749","Type":"ContainerStarted","Data":"ca912730a90b03a0a61ecae65b49a709bb43f31defae0a6fcc314fe0a5a72d27"} Dec 03 00:36:36 crc kubenswrapper[4805]: I1203 00:36:36.348189 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"44d5a53a-114f-4261-a53e-630fc66811fc","Type":"ContainerStarted","Data":"82795cea0b48f6b4e53654b3279c1129b3503a20a00003dad32a62436403f7d0"} Dec 03 00:36:36 crc kubenswrapper[4805]: I1203 00:36:36.362667 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-vgprn" podStartSLOduration=2.538262123 podStartE2EDuration="10.362650229s" podCreationTimestamp="2025-12-03 00:36:26 +0000 UTC" firstStartedPulling="2025-12-03 00:36:27.596484272 +0000 UTC m=+1811.445446878" 
lastFinishedPulling="2025-12-03 00:36:35.420872358 +0000 UTC m=+1819.269834984" observedRunningTime="2025-12-03 00:36:36.360403914 +0000 UTC m=+1820.209366520" watchObservedRunningTime="2025-12-03 00:36:36.362650229 +0000 UTC m=+1820.211612825" Dec 03 00:36:38 crc kubenswrapper[4805]: I1203 00:36:38.364033 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"44d5a53a-114f-4261-a53e-630fc66811fc","Type":"ContainerStarted","Data":"1f0dfb99e8b5025b546158c621fcaa47b06d0b23251c9dd3e7a83b641fc51f73"} Dec 03 00:36:41 crc kubenswrapper[4805]: I1203 00:36:41.392554 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0deccfcc-828c-479b-a0dd-a42fa9146444","Type":"ContainerStarted","Data":"e5213a69165a277e391dd5049a8f9d415c0c78a5c97f33529ea63b626fea49ee"} Dec 03 00:36:43 crc kubenswrapper[4805]: I1203 00:36:43.420318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0deccfcc-828c-479b-a0dd-a42fa9146444","Type":"ContainerStarted","Data":"b5a27c3b4011316b023d41e0afd43d2951d65a1b1c26e987a88c751331f0c1c9"} Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.443068 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26"] Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.445662 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.449965 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.451491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.451674 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-p4j4r" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.451739 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.457449 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26"] Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.553517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.553909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.553996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.554135 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbsz8\" (UniqueName: \"kubernetes.io/projected/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-kube-api-access-nbsz8\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.554279 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.656332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 
crc kubenswrapper[4805]: I1203 00:36:44.656397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.656452 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbsz8\" (UniqueName: \"kubernetes.io/projected/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-kube-api-access-nbsz8\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.656490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: E1203 00:36:44.656518 4805 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.656555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: E1203 00:36:44.656595 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls podName:2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b nodeName:}" failed. No retries permitted until 2025-12-03 00:36:45.156570533 +0000 UTC m=+1829.005533139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" (UID: "2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.661363 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.661958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.670190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-session-secret\") pod 
\"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:44 crc kubenswrapper[4805]: I1203 00:36:44.677030 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbsz8\" (UniqueName: \"kubernetes.io/projected/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-kube-api-access-nbsz8\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:45 crc kubenswrapper[4805]: I1203 00:36:45.164224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:45 crc kubenswrapper[4805]: E1203 00:36:45.164432 4805 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:36:45 crc kubenswrapper[4805]: E1203 00:36:45.164568 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls podName:2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b nodeName:}" failed. No retries permitted until 2025-12-03 00:36:46.164534886 +0000 UTC m=+1830.013497492 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" (UID: "2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 03 00:36:45 crc kubenswrapper[4805]: I1203 00:36:45.423926 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:36:45 crc kubenswrapper[4805]: E1203 00:36:45.424161 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:36:46 crc kubenswrapper[4805]: I1203 00:36:46.218222 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:46 crc kubenswrapper[4805]: I1203 00:36:46.232534 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26\" (UID: \"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" 
Dec 03 00:36:46 crc kubenswrapper[4805]: I1203 00:36:46.268782 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" Dec 03 00:36:46 crc kubenswrapper[4805]: I1203 00:36:46.452009 4805 generic.go:334] "Generic (PLEG): container finished" podID="44d5a53a-114f-4261-a53e-630fc66811fc" containerID="1f0dfb99e8b5025b546158c621fcaa47b06d0b23251c9dd3e7a83b641fc51f73" exitCode=0 Dec 03 00:36:46 crc kubenswrapper[4805]: I1203 00:36:46.452055 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"44d5a53a-114f-4261-a53e-630fc66811fc","Type":"ContainerDied","Data":"1f0dfb99e8b5025b546158c621fcaa47b06d0b23251c9dd3e7a83b641fc51f73"} Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.249092 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6"] Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.250836 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.254230 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.256075 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.262624 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6"] Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.336052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.336106 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.336431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-sg-core-config\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.336658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6nlh\" (UniqueName: \"kubernetes.io/projected/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-kube-api-access-q6nlh\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.336766 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.438247 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.438314 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.438367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.438410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6nlh\" (UniqueName: \"kubernetes.io/projected/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-kube-api-access-q6nlh\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.438853 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.439260 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: E1203 00:36:47.439419 4805 secret.go:188] Couldn't get secret 
service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:47 crc kubenswrapper[4805]: E1203 00:36:47.439563 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls podName:df2a6e96-9e6c-4b4c-be6c-7ecdc2372714 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:47.93951283 +0000 UTC m=+1831.788475436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" (UID: "df2a6e96-9e6c-4b4c-be6c-7ecdc2372714") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.439448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.456437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6nlh\" (UniqueName: \"kubernetes.io/projected/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-kube-api-access-q6nlh\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.457624 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: I1203 00:36:47.946323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:47 crc kubenswrapper[4805]: E1203 00:36:47.946542 4805 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:47 crc kubenswrapper[4805]: E1203 00:36:47.946849 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls podName:df2a6e96-9e6c-4b4c-be6c-7ecdc2372714 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:48.946821699 +0000 UTC m=+1832.795784305 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" (UID: "df2a6e96-9e6c-4b4c-be6c-7ecdc2372714") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 03 00:36:48 crc kubenswrapper[4805]: I1203 00:36:48.965952 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:48 crc kubenswrapper[4805]: I1203 00:36:48.971098 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2a6e96-9e6c-4b4c-be6c-7ecdc2372714-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6\" (UID: \"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:49 crc kubenswrapper[4805]: I1203 00:36:49.084371 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.112323 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw"] Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.114348 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.120274 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.120553 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.145316 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw"] Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.216222 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.216282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.216304 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e520d91-fcbb-42a4-8955-55b847760c58-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: 
\"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.216330 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5e520d91-fcbb-42a4-8955-55b847760c58-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.216420 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdgd9\" (UniqueName: \"kubernetes.io/projected/5e520d91-fcbb-42a4-8955-55b847760c58-kube-api-access-xdgd9\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.317502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdgd9\" (UniqueName: \"kubernetes.io/projected/5e520d91-fcbb-42a4-8955-55b847760c58-kube-api-access-xdgd9\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.317574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 
crc kubenswrapper[4805]: I1203 00:36:52.317600 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.317623 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e520d91-fcbb-42a4-8955-55b847760c58-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.317649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5e520d91-fcbb-42a4-8955-55b847760c58-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: E1203 00:36:52.318036 4805 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:52 crc kubenswrapper[4805]: E1203 00:36:52.318113 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls podName:5e520d91-fcbb-42a4-8955-55b847760c58 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:52.818085369 +0000 UTC m=+1836.667047965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" (UID: "5e520d91-fcbb-42a4-8955-55b847760c58") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.318887 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e520d91-fcbb-42a4-8955-55b847760c58-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.318922 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5e520d91-fcbb-42a4-8955-55b847760c58-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.331915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.334862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdgd9\" (UniqueName: \"kubernetes.io/projected/5e520d91-fcbb-42a4-8955-55b847760c58-kube-api-access-xdgd9\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: 
\"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: I1203 00:36:52.825537 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:52 crc kubenswrapper[4805]: E1203 00:36:52.825694 4805 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:52 crc kubenswrapper[4805]: E1203 00:36:52.825771 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls podName:5e520d91-fcbb-42a4-8955-55b847760c58 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:53.825748725 +0000 UTC m=+1837.674711331 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" (UID: "5e520d91-fcbb-42a4-8955-55b847760c58") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:53 crc kubenswrapper[4805]: I1203 00:36:53.840481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:53 crc kubenswrapper[4805]: E1203 00:36:53.840826 4805 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:53 crc kubenswrapper[4805]: E1203 00:36:53.841952 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls podName:5e520d91-fcbb-42a4-8955-55b847760c58 nodeName:}" failed. No retries permitted until 2025-12-03 00:36:55.841922049 +0000 UTC m=+1839.690884645 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" (UID: "5e520d91-fcbb-42a4-8955-55b847760c58") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 03 00:36:55 crc kubenswrapper[4805]: I1203 00:36:55.896979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:55 crc kubenswrapper[4805]: I1203 00:36:55.906701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e520d91-fcbb-42a4-8955-55b847760c58-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-726lw\" (UID: \"5e520d91-fcbb-42a4-8955-55b847760c58\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:56 crc kubenswrapper[4805]: I1203 00:36:56.035605 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" Dec 03 00:36:56 crc kubenswrapper[4805]: I1203 00:36:56.427882 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:36:56 crc kubenswrapper[4805]: E1203 00:36:56.428144 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:36:58 crc kubenswrapper[4805]: E1203 00:36:58.436699 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/openshift/origin-oauth-proxy:latest" Dec 03 00:36:58 crc kubenswrapper[4805]: E1203 00:36:58.437605 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:oauth-proxy,Image:quay.io/openshift/origin-oauth-proxy:latest,Command:[],Args:[-https-address=:9092 -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -upstream=http://localhost:9090/ -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=prometheus-stf -openshift-sar={\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"resourceAPIGroup\":\"monitoring.rhobs\", \"verb\":\"get\"} -openshift-delegate-urls={\"/\":{\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"group\":\"monitoring.rhobs\", 
\"verb\":\"get\"}}],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:9092,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:secret-default-prometheus-proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:false,MountPath:/etc/proxy/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l8x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-default-0_service-telemetry(0deccfcc-828c-479b-a0dd-a42fa9146444): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 00:36:58 crc kubenswrapper[4805]: E1203 00:36:58.439011 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/prometheus-default-0" 
podUID="0deccfcc-828c-479b-a0dd-a42fa9146444" Dec 03 00:36:58 crc kubenswrapper[4805]: E1203 00:36:58.545021 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="0deccfcc-828c-479b-a0dd-a42fa9146444" Dec 03 00:36:58 crc kubenswrapper[4805]: I1203 00:36:58.956256 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26"] Dec 03 00:36:59 crc kubenswrapper[4805]: I1203 00:36:59.022492 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw"] Dec 03 00:36:59 crc kubenswrapper[4805]: I1203 00:36:59.030835 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6"] Dec 03 00:36:59 crc kubenswrapper[4805]: I1203 00:36:59.559132 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerStarted","Data":"00765b429db781f376cba17eda608e2304035f4544a5740ce91d4a0f6f25b34f"} Dec 03 00:36:59 crc kubenswrapper[4805]: I1203 00:36:59.561978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerStarted","Data":"9adc96d8fa60d6d3f4be82073266420b364cb6e6185e3350d2f7e0fc7d54f4b7"} Dec 03 00:36:59 crc kubenswrapper[4805]: I1203 00:36:59.563529 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" 
event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerStarted","Data":"4bb1eb4403e9c1902a3eddd883d1e5de3a6a5003ff09da263abf99e658d14d21"} Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.289138 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98"] Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.302913 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98"] Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.303132 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.308663 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.311713 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.406730 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/aa845b5b-3900-44d4-8b25-e72584cea960-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.406777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/aa845b5b-3900-44d4-8b25-e72584cea960-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.406819 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/aa845b5b-3900-44d4-8b25-e72584cea960-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.406848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjp97\" (UniqueName: \"kubernetes.io/projected/aa845b5b-3900-44d4-8b25-e72584cea960-kube-api-access-qjp97\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.508375 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/aa845b5b-3900-44d4-8b25-e72584cea960-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.508444 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjp97\" (UniqueName: \"kubernetes.io/projected/aa845b5b-3900-44d4-8b25-e72584cea960-kube-api-access-qjp97\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 
00:37:01.508570 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/aa845b5b-3900-44d4-8b25-e72584cea960-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.508603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/aa845b5b-3900-44d4-8b25-e72584cea960-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.509984 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/aa845b5b-3900-44d4-8b25-e72584cea960-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.510514 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/aa845b5b-3900-44d4-8b25-e72584cea960-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.519789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/aa845b5b-3900-44d4-8b25-e72584cea960-elastic-certs\") pod 
\"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.534898 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64"] Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.536549 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.539452 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.544132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjp97\" (UniqueName: \"kubernetes.io/projected/aa845b5b-3900-44d4-8b25-e72584cea960-kube-api-access-qjp97\") pod \"default-cloud1-coll-event-smartgateway-66b65b946f-pbx98\" (UID: \"aa845b5b-3900-44d4-8b25-e72584cea960\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.562552 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64"] Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.594294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerStarted","Data":"0325e2b306f011d195f44d573c252a4ec0dfb8aa80e8f25e644bae20ce9c6f3d"} Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.603140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"44d5a53a-114f-4261-a53e-630fc66811fc","Type":"ContainerStarted","Data":"75c9a167332b497842a64c67153a248835359fc1e54e940009dbc6ab3fd59840"} Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.605293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerStarted","Data":"187286f0fcf7e6ed2fafe7d9a0d159906eae3385c3d89ce3ab2c264a1c6874b2"} Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.610148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerStarted","Data":"09a65b7c37ab9690a8269811040229519a04e1d7f82d7c0ad7cef6e469443995"} Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.625642 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.710514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.710578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.710624 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzsc\" (UniqueName: \"kubernetes.io/projected/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-kube-api-access-sqzsc\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.710657 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.813034 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.813419 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzsc\" (UniqueName: \"kubernetes.io/projected/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-kube-api-access-sqzsc\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.813454 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.813501 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.814115 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.814382 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.827044 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-elastic-certs\") pod 
\"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.831900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzsc\" (UniqueName: \"kubernetes.io/projected/9aa2a224-ceaa-4107-af3e-99d9d72fc6f3-kube-api-access-sqzsc\") pod \"default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64\" (UID: \"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:01 crc kubenswrapper[4805]: I1203 00:37:01.898256 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" Dec 03 00:37:02 crc kubenswrapper[4805]: I1203 00:37:02.109324 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98"] Dec 03 00:37:02 crc kubenswrapper[4805]: I1203 00:37:02.342320 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64"] Dec 03 00:37:02 crc kubenswrapper[4805]: I1203 00:37:02.626933 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" event={"ID":"aa845b5b-3900-44d4-8b25-e72584cea960","Type":"ContainerStarted","Data":"8d23c43eace1ca5fbb8163c2a135c9c41c232c007979cf8f09aa65156dbc514c"} Dec 03 00:37:02 crc kubenswrapper[4805]: I1203 00:37:02.631465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" event={"ID":"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3","Type":"ContainerStarted","Data":"7ed4229895c74185432ecfeae85b7f24651fc122cd9e208a68edae169af9de79"} Dec 03 00:37:02 crc 
kubenswrapper[4805]: I1203 00:37:02.918481 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 03 00:37:02 crc kubenswrapper[4805]: I1203 00:37:02.918554 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 03 00:37:02 crc kubenswrapper[4805]: E1203 00:37:02.923940 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="0deccfcc-828c-479b-a0dd-a42fa9146444" Dec 03 00:37:02 crc kubenswrapper[4805]: I1203 00:37:02.992054 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 03 00:37:03 crc kubenswrapper[4805]: I1203 00:37:03.646587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"44d5a53a-114f-4261-a53e-630fc66811fc","Type":"ContainerStarted","Data":"9c32beb480c72d17327886fd61c5997d27ff7706dc61c697e3f36a34778ea66e"} Dec 03 00:37:03 crc kubenswrapper[4805]: E1203 00:37:03.650331 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="0deccfcc-828c-479b-a0dd-a42fa9146444" Dec 03 00:37:03 crc kubenswrapper[4805]: I1203 00:37:03.726570 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 03 00:37:04 crc kubenswrapper[4805]: I1203 00:37:04.661667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"44d5a53a-114f-4261-a53e-630fc66811fc","Type":"ContainerStarted","Data":"8326cb71f0de90f2979d7e110fc675de5cf5d4f145a91624d7eb179ce42e5d8b"} Dec 03 00:37:06 crc kubenswrapper[4805]: I1203 00:37:06.447114 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.695550702 podStartE2EDuration="37.44708374s" podCreationTimestamp="2025-12-03 00:36:29 +0000 UTC" firstStartedPulling="2025-12-03 00:36:46.454893015 +0000 UTC m=+1830.303855621" lastFinishedPulling="2025-12-03 00:37:03.206426053 +0000 UTC m=+1847.055388659" observedRunningTime="2025-12-03 00:37:04.698639813 +0000 UTC m=+1848.547602439" watchObservedRunningTime="2025-12-03 00:37:06.44708374 +0000 UTC m=+1850.296046356" Dec 03 00:37:07 crc kubenswrapper[4805]: E1203 00:37:07.624392 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="0deccfcc-828c-479b-a0dd-a42fa9146444" Dec 03 00:37:08 crc kubenswrapper[4805]: I1203 00:37:08.704723 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" event={"ID":"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3","Type":"ContainerStarted","Data":"263b0f6d7e96a8dba91f7ff79ac20ef7b0c02983ab5aa5fe47e20e52f682ac3c"} Dec 03 00:37:08 crc kubenswrapper[4805]: I1203 00:37:08.708538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerStarted","Data":"1aec223666ce19fabe7e884ba122d036d6dca355bbb2b183d4114f9097e53305"} Dec 03 00:37:08 crc kubenswrapper[4805]: I1203 00:37:08.725168 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerStarted","Data":"77e6f2409d6dab8e0f2afb9306149719429f48af021c48859ade68221a3c13d2"} Dec 03 00:37:08 crc kubenswrapper[4805]: I1203 00:37:08.727601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" event={"ID":"aa845b5b-3900-44d4-8b25-e72584cea960","Type":"ContainerStarted","Data":"bb67061eac73b6df97dad7315e919c855a73c476020911086da5569adfb5d812"} Dec 03 00:37:08 crc kubenswrapper[4805]: I1203 00:37:08.732870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerStarted","Data":"f8bf722d65269877e8f91f28c6df2da6108135b980c0194a67aaebf0b7acb207"} Dec 03 00:37:11 crc kubenswrapper[4805]: I1203 00:37:11.423894 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:37:11 crc kubenswrapper[4805]: E1203 00:37:11.424642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:37:15 crc kubenswrapper[4805]: I1203 00:37:15.666694 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zh2dz"] Dec 03 00:37:15 crc kubenswrapper[4805]: I1203 00:37:15.667441 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" 
podUID="91a28bb2-1342-478f-8a0f-77aec3120166" containerName="default-interconnect" containerID="cri-o://6ef6789b1c0468f712ceea0872f89afadf281943a396893694b884476a65042c" gracePeriod=30 Dec 03 00:37:15 crc kubenswrapper[4805]: I1203 00:37:15.800479 4805 generic.go:334] "Generic (PLEG): container finished" podID="91a28bb2-1342-478f-8a0f-77aec3120166" containerID="6ef6789b1c0468f712ceea0872f89afadf281943a396893694b884476a65042c" exitCode=0 Dec 03 00:37:15 crc kubenswrapper[4805]: I1203 00:37:15.800540 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" event={"ID":"91a28bb2-1342-478f-8a0f-77aec3120166","Type":"ContainerDied","Data":"6ef6789b1c0468f712ceea0872f89afadf281943a396893694b884476a65042c"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.545994 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.716286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-ca\") pod \"91a28bb2-1342-478f-8a0f-77aec3120166\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.717975 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmrmz\" (UniqueName: \"kubernetes.io/projected/91a28bb2-1342-478f-8a0f-77aec3120166-kube-api-access-wmrmz\") pod \"91a28bb2-1342-478f-8a0f-77aec3120166\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.718088 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-config\") pod 
\"91a28bb2-1342-478f-8a0f-77aec3120166\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.718170 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-credentials\") pod \"91a28bb2-1342-478f-8a0f-77aec3120166\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.718306 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-users\") pod \"91a28bb2-1342-478f-8a0f-77aec3120166\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.718647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-ca\") pod \"91a28bb2-1342-478f-8a0f-77aec3120166\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.718760 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-credentials\") pod \"91a28bb2-1342-478f-8a0f-77aec3120166\" (UID: \"91a28bb2-1342-478f-8a0f-77aec3120166\") " Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.720606 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "91a28bb2-1342-478f-8a0f-77aec3120166" (UID: "91a28bb2-1342-478f-8a0f-77aec3120166"). 
InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.725486 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "91a28bb2-1342-478f-8a0f-77aec3120166" (UID: "91a28bb2-1342-478f-8a0f-77aec3120166"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.725496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a28bb2-1342-478f-8a0f-77aec3120166-kube-api-access-wmrmz" (OuterVolumeSpecName: "kube-api-access-wmrmz") pod "91a28bb2-1342-478f-8a0f-77aec3120166" (UID: "91a28bb2-1342-478f-8a0f-77aec3120166"). InnerVolumeSpecName "kube-api-access-wmrmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.725658 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "91a28bb2-1342-478f-8a0f-77aec3120166" (UID: "91a28bb2-1342-478f-8a0f-77aec3120166"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.726827 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "91a28bb2-1342-478f-8a0f-77aec3120166" (UID: "91a28bb2-1342-478f-8a0f-77aec3120166"). 
InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.728682 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "91a28bb2-1342-478f-8a0f-77aec3120166" (UID: "91a28bb2-1342-478f-8a0f-77aec3120166"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.728931 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "91a28bb2-1342-478f-8a0f-77aec3120166" (UID: "91a28bb2-1342-478f-8a0f-77aec3120166"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.814123 4805 generic.go:334] "Generic (PLEG): container finished" podID="aa845b5b-3900-44d4-8b25-e72584cea960" containerID="bb67061eac73b6df97dad7315e919c855a73c476020911086da5569adfb5d812" exitCode=0 Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.814254 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" event={"ID":"aa845b5b-3900-44d4-8b25-e72584cea960","Type":"ContainerDied","Data":"bb67061eac73b6df97dad7315e919c855a73c476020911086da5569adfb5d812"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.814301 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" event={"ID":"aa845b5b-3900-44d4-8b25-e72584cea960","Type":"ContainerStarted","Data":"e673ff15eb67a9dd14c502eefd58eb4d00c325b2f5d9e949a4e93063703a3ccf"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.815137 4805 scope.go:117] "RemoveContainer" containerID="bb67061eac73b6df97dad7315e919c855a73c476020911086da5569adfb5d812" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.817439 4805 generic.go:334] "Generic (PLEG): container finished" podID="9aa2a224-ceaa-4107-af3e-99d9d72fc6f3" containerID="263b0f6d7e96a8dba91f7ff79ac20ef7b0c02983ab5aa5fe47e20e52f682ac3c" exitCode=0 Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.817502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" event={"ID":"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3","Type":"ContainerDied","Data":"263b0f6d7e96a8dba91f7ff79ac20ef7b0c02983ab5aa5fe47e20e52f682ac3c"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.817530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" 
event={"ID":"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3","Type":"ContainerStarted","Data":"fb603fb4a196b6813dbca3f7084c6640094d0b65256eb34ec7ab7cb9e0ff22ea"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.818668 4805 scope.go:117] "RemoveContainer" containerID="263b0f6d7e96a8dba91f7ff79ac20ef7b0c02983ab5aa5fe47e20e52f682ac3c" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.820453 4805 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.820488 4805 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.820502 4805 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.820515 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmrmz\" (UniqueName: \"kubernetes.io/projected/91a28bb2-1342-478f-8a0f-77aec3120166-kube-api-access-wmrmz\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.820527 4805 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.820537 4805 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.820549 4805 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/91a28bb2-1342-478f-8a0f-77aec3120166-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.821293 4805 generic.go:334] "Generic (PLEG): container finished" podID="2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b" containerID="1aec223666ce19fabe7e884ba122d036d6dca355bbb2b183d4114f9097e53305" exitCode=0 Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.821438 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerDied","Data":"1aec223666ce19fabe7e884ba122d036d6dca355bbb2b183d4114f9097e53305"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.821499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerStarted","Data":"cf5e8a3769d6c73072d3d5193ab9c456a3d1c43b3e3adfcb31820579d52f4161"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.821893 4805 scope.go:117] "RemoveContainer" containerID="1aec223666ce19fabe7e884ba122d036d6dca355bbb2b183d4114f9097e53305" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.826047 4805 generic.go:334] "Generic (PLEG): container finished" podID="5e520d91-fcbb-42a4-8955-55b847760c58" containerID="77e6f2409d6dab8e0f2afb9306149719429f48af021c48859ade68221a3c13d2" exitCode=0 Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.826121 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" 
event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerDied","Data":"77e6f2409d6dab8e0f2afb9306149719429f48af021c48859ade68221a3c13d2"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.826155 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerStarted","Data":"468364817a8214f7cc54217d845e866082aa88ba607d67ebc8fee22dc845cae3"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.826674 4805 scope.go:117] "RemoveContainer" containerID="77e6f2409d6dab8e0f2afb9306149719429f48af021c48859ade68221a3c13d2" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.833045 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.833065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zh2dz" event={"ID":"91a28bb2-1342-478f-8a0f-77aec3120166","Type":"ContainerDied","Data":"5f413199aaccc811c3eb6db131d4a8f4c21c2a83605e8b1b262b8f08d8a6fcbb"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.833112 4805 scope.go:117] "RemoveContainer" containerID="6ef6789b1c0468f712ceea0872f89afadf281943a396893694b884476a65042c" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.848508 4805 generic.go:334] "Generic (PLEG): container finished" podID="df2a6e96-9e6c-4b4c-be6c-7ecdc2372714" containerID="f8bf722d65269877e8f91f28c6df2da6108135b980c0194a67aaebf0b7acb207" exitCode=0 Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.848571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerDied","Data":"f8bf722d65269877e8f91f28c6df2da6108135b980c0194a67aaebf0b7acb207"} Dec 
03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.848609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerStarted","Data":"aacc2be73a7d9bd8da7469d2b54ebac4974d5426367178977f321c6d1977c638"} Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.849109 4805 scope.go:117] "RemoveContainer" containerID="f8bf722d65269877e8f91f28c6df2da6108135b980c0194a67aaebf0b7acb207" Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.963720 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zh2dz"] Dec 03 00:37:16 crc kubenswrapper[4805]: I1203 00:37:16.982653 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zh2dz"] Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.350862 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-72xhl"] Dec 03 00:37:17 crc kubenswrapper[4805]: E1203 00:37:17.351545 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a28bb2-1342-478f-8a0f-77aec3120166" containerName="default-interconnect" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.351731 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a28bb2-1342-478f-8a0f-77aec3120166" containerName="default-interconnect" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.351966 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a28bb2-1342-478f-8a0f-77aec3120166" containerName="default-interconnect" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.352676 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.362791 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.362917 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.363018 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.363562 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-2vx6q" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.363700 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.365516 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.365543 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.371008 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-72xhl"] Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.431258 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-sasl-users\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " 
pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.431349 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.431379 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.431515 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-sasl-config\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.431590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 
00:37:17.431686 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.431764 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mv7\" (UniqueName: \"kubernetes.io/projected/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-kube-api-access-g6mv7\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.533929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.534025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6mv7\" (UniqueName: \"kubernetes.io/projected/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-kube-api-access-g6mv7\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.534091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-sasl-users\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.534182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.534244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.534286 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-sasl-config\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.534334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " 
pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.535882 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-sasl-config\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.542237 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.542330 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.549632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-sasl-users\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.553156 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.557548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.558323 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6mv7\" (UniqueName: \"kubernetes.io/projected/4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8-kube-api-access-g6mv7\") pod \"default-interconnect-68864d46cb-72xhl\" (UID: \"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8\") " pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.669073 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-72xhl" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.861810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerStarted","Data":"2f9899f750f1eca01fd670262bd26d5a03be8614031e9694d5c4f56f61abc57f"} Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.866684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" event={"ID":"aa845b5b-3900-44d4-8b25-e72584cea960","Type":"ContainerStarted","Data":"5f08a2e160dcabe448cdfc55b8dae873985f31f90264f5b0487e641a794ef09b"} Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.871687 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" event={"ID":"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3","Type":"ContainerStarted","Data":"da03d837928da2ef8766b52d541b74139dcd522e4ed073728a20f51fb3d2d62a"} Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.880825 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerStarted","Data":"0c4712775bfab1ded3b6b29b4eb9bf7e28cb578baa875462568e695838d8646d"} Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.897745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerStarted","Data":"350ee3f63f5a585b2d1f66cdcb51f92768586e364e0ff9ad0ec027699dacfcf9"} Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.905748 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" podStartSLOduration=12.939384423 podStartE2EDuration="30.905718918s" podCreationTimestamp="2025-12-03 00:36:47 +0000 UTC" firstStartedPulling="2025-12-03 00:36:59.03809102 +0000 UTC m=+1842.887053626" lastFinishedPulling="2025-12-03 00:37:17.004425515 +0000 UTC m=+1860.853388121" observedRunningTime="2025-12-03 00:37:17.884589133 +0000 UTC m=+1861.733551759" watchObservedRunningTime="2025-12-03 00:37:17.905718918 +0000 UTC m=+1861.754681534" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.924355 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" podStartSLOduration=15.896062318 podStartE2EDuration="33.924326402s" podCreationTimestamp="2025-12-03 00:36:44 +0000 UTC" firstStartedPulling="2025-12-03 00:36:58.965463629 +0000 UTC m=+1842.814426235" lastFinishedPulling="2025-12-03 00:37:16.993727713 +0000 UTC m=+1860.842690319" observedRunningTime="2025-12-03 00:37:17.912869962 +0000 UTC m=+1861.761832578" watchObservedRunningTime="2025-12-03 00:37:17.924326402 +0000 UTC m=+1861.773289018" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.941684 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" podStartSLOduration=2.3610501729999998 podStartE2EDuration="16.941655054s" podCreationTimestamp="2025-12-03 00:37:01 +0000 UTC" firstStartedPulling="2025-12-03 00:37:02.368386032 +0000 UTC m=+1846.217348638" lastFinishedPulling="2025-12-03 00:37:16.948990913 +0000 UTC m=+1860.797953519" observedRunningTime="2025-12-03 00:37:17.938740863 +0000 UTC m=+1861.787703469" watchObservedRunningTime="2025-12-03 00:37:17.941655054 +0000 UTC m=+1861.790617680" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.957191 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" podStartSLOduration=2.050765798 podStartE2EDuration="16.957168082s" podCreationTimestamp="2025-12-03 00:37:01 +0000 UTC" firstStartedPulling="2025-12-03 00:37:02.126011923 +0000 UTC m=+1845.974974529" lastFinishedPulling="2025-12-03 00:37:17.032414197 +0000 UTC m=+1860.881376813" observedRunningTime="2025-12-03 00:37:17.953946824 +0000 UTC m=+1861.802909440" watchObservedRunningTime="2025-12-03 00:37:17.957168082 +0000 UTC m=+1861.806130688" Dec 03 00:37:17 crc kubenswrapper[4805]: I1203 00:37:17.981389 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" podStartSLOduration=7.985975249 podStartE2EDuration="25.981366282s" podCreationTimestamp="2025-12-03 00:36:52 +0000 UTC" firstStartedPulling="2025-12-03 00:36:59.036339457 +0000 UTC m=+1842.885302063" lastFinishedPulling="2025-12-03 00:37:17.03173049 +0000 UTC m=+1860.880693096" observedRunningTime="2025-12-03 00:37:17.979515317 +0000 UTC m=+1861.828477933" watchObservedRunningTime="2025-12-03 00:37:17.981366282 +0000 UTC m=+1861.830328888" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.124505 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-72xhl"] Dec 03 00:37:18 crc kubenswrapper[4805]: W1203 00:37:18.125908 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e3eb4bc_f157_45dd_8b15_dbcd89d7b6f8.slice/crio-0cc784275636279e322fe35177a3e9c09b9547d094ddbdf23b4d673f4fcaa447 WatchSource:0}: Error finding container 0cc784275636279e322fe35177a3e9c09b9547d094ddbdf23b4d673f4fcaa447: Status 404 returned error can't find the container with id 0cc784275636279e322fe35177a3e9c09b9547d094ddbdf23b4d673f4fcaa447 Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.441184 4805 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="91a28bb2-1342-478f-8a0f-77aec3120166" path="/var/lib/kubelet/pods/91a28bb2-1342-478f-8a0f-77aec3120166/volumes" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.913335 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-72xhl" event={"ID":"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8","Type":"ContainerStarted","Data":"84f828eca29da0ec6ffe255758fd0a11c3de6dd6da25e5768127fc54d2e18939"} Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.913388 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-72xhl" event={"ID":"4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8","Type":"ContainerStarted","Data":"0cc784275636279e322fe35177a3e9c09b9547d094ddbdf23b4d673f4fcaa447"} Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.917429 4805 generic.go:334] "Generic (PLEG): container finished" podID="2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b" containerID="0c4712775bfab1ded3b6b29b4eb9bf7e28cb578baa875462568e695838d8646d" exitCode=0 Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.917477 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerDied","Data":"0c4712775bfab1ded3b6b29b4eb9bf7e28cb578baa875462568e695838d8646d"} Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.917505 4805 scope.go:117] "RemoveContainer" containerID="1aec223666ce19fabe7e884ba122d036d6dca355bbb2b183d4114f9097e53305" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.917874 4805 scope.go:117] "RemoveContainer" containerID="0c4712775bfab1ded3b6b29b4eb9bf7e28cb578baa875462568e695838d8646d" Dec 03 00:37:18 crc kubenswrapper[4805]: E1203 00:37:18.918048 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26_service-telemetry(2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" podUID="2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.924521 4805 generic.go:334] "Generic (PLEG): container finished" podID="5e520d91-fcbb-42a4-8955-55b847760c58" containerID="350ee3f63f5a585b2d1f66cdcb51f92768586e364e0ff9ad0ec027699dacfcf9" exitCode=0 Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.924570 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerDied","Data":"350ee3f63f5a585b2d1f66cdcb51f92768586e364e0ff9ad0ec027699dacfcf9"} Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.924932 4805 scope.go:117] "RemoveContainer" containerID="350ee3f63f5a585b2d1f66cdcb51f92768586e364e0ff9ad0ec027699dacfcf9" Dec 03 00:37:18 crc kubenswrapper[4805]: E1203 00:37:18.925119 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-726lw_service-telemetry(5e520d91-fcbb-42a4-8955-55b847760c58)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" podUID="5e520d91-fcbb-42a4-8955-55b847760c58" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.930692 4805 generic.go:334] "Generic (PLEG): container finished" podID="df2a6e96-9e6c-4b4c-be6c-7ecdc2372714" containerID="2f9899f750f1eca01fd670262bd26d5a03be8614031e9694d5c4f56f61abc57f" exitCode=0 Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.930743 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" 
event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerDied","Data":"2f9899f750f1eca01fd670262bd26d5a03be8614031e9694d5c4f56f61abc57f"} Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.931016 4805 scope.go:117] "RemoveContainer" containerID="2f9899f750f1eca01fd670262bd26d5a03be8614031e9694d5c4f56f61abc57f" Dec 03 00:37:18 crc kubenswrapper[4805]: E1203 00:37:18.931163 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6_service-telemetry(df2a6e96-9e6c-4b4c-be6c-7ecdc2372714)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" podUID="df2a6e96-9e6c-4b4c-be6c-7ecdc2372714" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.935762 4805 generic.go:334] "Generic (PLEG): container finished" podID="aa845b5b-3900-44d4-8b25-e72584cea960" containerID="5f08a2e160dcabe448cdfc55b8dae873985f31f90264f5b0487e641a794ef09b" exitCode=0 Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.935818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" event={"ID":"aa845b5b-3900-44d4-8b25-e72584cea960","Type":"ContainerDied","Data":"5f08a2e160dcabe448cdfc55b8dae873985f31f90264f5b0487e641a794ef09b"} Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.936236 4805 scope.go:117] "RemoveContainer" containerID="5f08a2e160dcabe448cdfc55b8dae873985f31f90264f5b0487e641a794ef09b" Dec 03 00:37:18 crc kubenswrapper[4805]: E1203 00:37:18.936421 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-66b65b946f-pbx98_service-telemetry(aa845b5b-3900-44d4-8b25-e72584cea960)\"" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" podUID="aa845b5b-3900-44d4-8b25-e72584cea960" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.948511 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-72xhl" podStartSLOduration=3.94848003 podStartE2EDuration="3.94848003s" podCreationTimestamp="2025-12-03 00:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 00:37:18.942957745 +0000 UTC m=+1862.791920381" watchObservedRunningTime="2025-12-03 00:37:18.94848003 +0000 UTC m=+1862.797442646" Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.960161 4805 generic.go:334] "Generic (PLEG): container finished" podID="9aa2a224-ceaa-4107-af3e-99d9d72fc6f3" containerID="da03d837928da2ef8766b52d541b74139dcd522e4ed073728a20f51fb3d2d62a" exitCode=0 Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.960239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" event={"ID":"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3","Type":"ContainerDied","Data":"da03d837928da2ef8766b52d541b74139dcd522e4ed073728a20f51fb3d2d62a"} Dec 03 00:37:18 crc kubenswrapper[4805]: I1203 00:37:18.960598 4805 scope.go:117] "RemoveContainer" containerID="da03d837928da2ef8766b52d541b74139dcd522e4ed073728a20f51fb3d2d62a" Dec 03 00:37:18 crc kubenswrapper[4805]: E1203 00:37:18.960805 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64_service-telemetry(9aa2a224-ceaa-4107-af3e-99d9d72fc6f3)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" podUID="9aa2a224-ceaa-4107-af3e-99d9d72fc6f3" Dec 03 00:37:18 crc 
kubenswrapper[4805]: I1203 00:37:18.988521 4805 scope.go:117] "RemoveContainer" containerID="77e6f2409d6dab8e0f2afb9306149719429f48af021c48859ade68221a3c13d2" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.311580 4805 scope.go:117] "RemoveContainer" containerID="f8bf722d65269877e8f91f28c6df2da6108135b980c0194a67aaebf0b7acb207" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.347615 4805 scope.go:117] "RemoveContainer" containerID="bb67061eac73b6df97dad7315e919c855a73c476020911086da5569adfb5d812" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.406689 4805 scope.go:117] "RemoveContainer" containerID="263b0f6d7e96a8dba91f7ff79ac20ef7b0c02983ab5aa5fe47e20e52f682ac3c" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.971717 4805 scope.go:117] "RemoveContainer" containerID="2f9899f750f1eca01fd670262bd26d5a03be8614031e9694d5c4f56f61abc57f" Dec 03 00:37:19 crc kubenswrapper[4805]: E1203 00:37:19.971925 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6_service-telemetry(df2a6e96-9e6c-4b4c-be6c-7ecdc2372714)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" podUID="df2a6e96-9e6c-4b4c-be6c-7ecdc2372714" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.976713 4805 scope.go:117] "RemoveContainer" containerID="5f08a2e160dcabe448cdfc55b8dae873985f31f90264f5b0487e641a794ef09b" Dec 03 00:37:19 crc kubenswrapper[4805]: E1203 00:37:19.977004 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-66b65b946f-pbx98_service-telemetry(aa845b5b-3900-44d4-8b25-e72584cea960)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" 
podUID="aa845b5b-3900-44d4-8b25-e72584cea960" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.978227 4805 scope.go:117] "RemoveContainer" containerID="da03d837928da2ef8766b52d541b74139dcd522e4ed073728a20f51fb3d2d62a" Dec 03 00:37:19 crc kubenswrapper[4805]: E1203 00:37:19.978461 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64_service-telemetry(9aa2a224-ceaa-4107-af3e-99d9d72fc6f3)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" podUID="9aa2a224-ceaa-4107-af3e-99d9d72fc6f3" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.980779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0deccfcc-828c-479b-a0dd-a42fa9146444","Type":"ContainerStarted","Data":"a14373f78d2a083f73ed2c931df259dc24bcfbb7138dfaeb1164cbacaf53006b"} Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.983818 4805 scope.go:117] "RemoveContainer" containerID="0c4712775bfab1ded3b6b29b4eb9bf7e28cb578baa875462568e695838d8646d" Dec 03 00:37:19 crc kubenswrapper[4805]: E1203 00:37:19.984432 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26_service-telemetry(2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" podUID="2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b" Dec 03 00:37:19 crc kubenswrapper[4805]: I1203 00:37:19.985786 4805 scope.go:117] "RemoveContainer" containerID="350ee3f63f5a585b2d1f66cdcb51f92768586e364e0ff9ad0ec027699dacfcf9" Dec 03 00:37:19 crc kubenswrapper[4805]: E1203 00:37:19.985975 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-726lw_service-telemetry(5e520d91-fcbb-42a4-8955-55b847760c58)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" podUID="5e520d91-fcbb-42a4-8955-55b847760c58" Dec 03 00:37:20 crc kubenswrapper[4805]: I1203 00:37:20.113104 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.000247497 podStartE2EDuration="1m5.113078583s" podCreationTimestamp="2025-12-03 00:36:15 +0000 UTC" firstStartedPulling="2025-12-03 00:36:18.198972292 +0000 UTC m=+1802.047934898" lastFinishedPulling="2025-12-03 00:37:19.311803378 +0000 UTC m=+1863.160765984" observedRunningTime="2025-12-03 00:37:20.09083076 +0000 UTC m=+1863.939793376" watchObservedRunningTime="2025-12-03 00:37:20.113078583 +0000 UTC m=+1863.962041189" Dec 03 00:37:23 crc kubenswrapper[4805]: I1203 00:37:23.424518 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:37:23 crc kubenswrapper[4805]: E1203 00:37:23.425142 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:37:32 crc kubenswrapper[4805]: I1203 00:37:32.423364 4805 scope.go:117] "RemoveContainer" containerID="2f9899f750f1eca01fd670262bd26d5a03be8614031e9694d5c4f56f61abc57f" Dec 03 00:37:32 crc kubenswrapper[4805]: I1203 00:37:32.424301 4805 scope.go:117] "RemoveContainer" 
containerID="0c4712775bfab1ded3b6b29b4eb9bf7e28cb578baa875462568e695838d8646d" Dec 03 00:37:32 crc kubenswrapper[4805]: I1203 00:37:32.428067 4805 scope.go:117] "RemoveContainer" containerID="da03d837928da2ef8766b52d541b74139dcd522e4ed073728a20f51fb3d2d62a" Dec 03 00:37:33 crc kubenswrapper[4805]: I1203 00:37:33.101830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26" event={"ID":"2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b","Type":"ContainerStarted","Data":"8ecb9829d2ed6bc7d944f61d601fae02ea003e4f5d49839bf038ac7b723d0eb1"} Dec 03 00:37:33 crc kubenswrapper[4805]: I1203 00:37:33.106890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6" event={"ID":"df2a6e96-9e6c-4b4c-be6c-7ecdc2372714","Type":"ContainerStarted","Data":"286e5f2372eb109fce317cc9d4efc88228bce4bdb0217ef1c859d82a44d3c6a7"} Dec 03 00:37:33 crc kubenswrapper[4805]: I1203 00:37:33.110610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64" event={"ID":"9aa2a224-ceaa-4107-af3e-99d9d72fc6f3","Type":"ContainerStarted","Data":"dc63ac02f9180d08a5fbdc93de496e96d843baea6fe3b5c2fd7ec62449a8a311"} Dec 03 00:37:34 crc kubenswrapper[4805]: I1203 00:37:34.423544 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:37:34 crc kubenswrapper[4805]: E1203 00:37:34.425148 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:37:35 crc 
kubenswrapper[4805]: I1203 00:37:35.424164 4805 scope.go:117] "RemoveContainer" containerID="5f08a2e160dcabe448cdfc55b8dae873985f31f90264f5b0487e641a794ef09b" Dec 03 00:37:35 crc kubenswrapper[4805]: I1203 00:37:35.424332 4805 scope.go:117] "RemoveContainer" containerID="350ee3f63f5a585b2d1f66cdcb51f92768586e364e0ff9ad0ec027699dacfcf9" Dec 03 00:37:36 crc kubenswrapper[4805]: I1203 00:37:36.144109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-66b65b946f-pbx98" event={"ID":"aa845b5b-3900-44d4-8b25-e72584cea960","Type":"ContainerStarted","Data":"7ad3aebbbd267be387b3ab9bc03b5b42f581923fe996267975dbd356f73049f3"} Dec 03 00:37:36 crc kubenswrapper[4805]: I1203 00:37:36.148226 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-726lw" event={"ID":"5e520d91-fcbb-42a4-8955-55b847760c58","Type":"ContainerStarted","Data":"920e7ed8e961b02d8d2bc4da6e8df8d6af2a164a4fb3d56cb769700ff76cf5e1"} Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.454648 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.459648 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.463379 4805 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.463665 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.475142 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.533802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2v4k\" (UniqueName: \"kubernetes.io/projected/24364326-cee4-4408-9351-8cd967dc5aef-kube-api-access-s2v4k\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.534299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/24364326-cee4-4408-9351-8cd967dc5aef-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.534628 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/24364326-cee4-4408-9351-8cd967dc5aef-qdr-test-config\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.636240 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/24364326-cee4-4408-9351-8cd967dc5aef-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.636413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/24364326-cee4-4408-9351-8cd967dc5aef-qdr-test-config\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.636460 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2v4k\" (UniqueName: \"kubernetes.io/projected/24364326-cee4-4408-9351-8cd967dc5aef-kube-api-access-s2v4k\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.637228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/24364326-cee4-4408-9351-8cd967dc5aef-qdr-test-config\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.642933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/24364326-cee4-4408-9351-8cd967dc5aef-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") " pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.658943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2v4k\" (UniqueName: \"kubernetes.io/projected/24364326-cee4-4408-9351-8cd967dc5aef-kube-api-access-s2v4k\") pod \"qdr-test\" (UID: \"24364326-cee4-4408-9351-8cd967dc5aef\") 
" pod="service-telemetry/qdr-test" Dec 03 00:37:46 crc kubenswrapper[4805]: I1203 00:37:46.798606 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 03 00:37:47 crc kubenswrapper[4805]: I1203 00:37:47.296937 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 03 00:37:47 crc kubenswrapper[4805]: I1203 00:37:47.424075 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:37:47 crc kubenswrapper[4805]: E1203 00:37:47.424618 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:37:48 crc kubenswrapper[4805]: I1203 00:37:48.275177 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"24364326-cee4-4408-9351-8cd967dc5aef","Type":"ContainerStarted","Data":"0cd6b451e6f312d6e78cf17ff3dbec274d25f4e8fa1d1174c1cc3b39cf16a2b7"} Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.377123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"24364326-cee4-4408-9351-8cd967dc5aef","Type":"ContainerStarted","Data":"84fd45cb6a61e86b4776a847b740ade420af7b39baadd9fcb81c7fd2ec15294f"} Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.398022 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.565852265 podStartE2EDuration="10.397993281s" podCreationTimestamp="2025-12-03 00:37:46 +0000 UTC" firstStartedPulling="2025-12-03 00:37:47.304238553 +0000 UTC 
m=+1891.153201209" lastFinishedPulling="2025-12-03 00:37:56.136379619 +0000 UTC m=+1899.985342225" observedRunningTime="2025-12-03 00:37:56.393018392 +0000 UTC m=+1900.241981018" watchObservedRunningTime="2025-12-03 00:37:56.397993281 +0000 UTC m=+1900.246955907" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.799944 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7k7gm"] Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.801626 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.804548 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.804627 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.805136 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.805719 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.806034 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.806132 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.825189 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7k7gm"] Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.866587 
4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.866669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.866898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-config\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.866978 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gx9\" (UniqueName: \"kubernetes.io/projected/02a20748-c3ea-4c77-aad9-c586408cee98-kube-api-access-52gx9\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.867083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: 
\"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.867130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-sensubility-config\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.867160 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-healthcheck-log\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.969343 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.969420 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.969460 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-config\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.969494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52gx9\" (UniqueName: \"kubernetes.io/projected/02a20748-c3ea-4c77-aad9-c586408cee98-kube-api-access-52gx9\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.969524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.969547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-sensubility-config\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.969570 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-healthcheck-log\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.970716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" 
(UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.970717 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.971041 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-sensubility-config\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.971076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-config\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.971456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-healthcheck-log\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.971588 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:56 crc kubenswrapper[4805]: I1203 00:37:56.995698 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52gx9\" (UniqueName: \"kubernetes.io/projected/02a20748-c3ea-4c77-aad9-c586408cee98-kube-api-access-52gx9\") pod \"stf-smoketest-smoke1-7k7gm\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.118635 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.315941 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.319186 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.354212 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.378301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrz6\" (UniqueName: \"kubernetes.io/projected/a8ef2518-d251-4427-aa8e-8676d6ca8513-kube-api-access-pbrz6\") pod \"curl\" (UID: \"a8ef2518-d251-4427-aa8e-8676d6ca8513\") " pod="service-telemetry/curl" Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.482357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbrz6\" (UniqueName: \"kubernetes.io/projected/a8ef2518-d251-4427-aa8e-8676d6ca8513-kube-api-access-pbrz6\") pod \"curl\" (UID: \"a8ef2518-d251-4427-aa8e-8676d6ca8513\") " pod="service-telemetry/curl" Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.508939 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbrz6\" (UniqueName: \"kubernetes.io/projected/a8ef2518-d251-4427-aa8e-8676d6ca8513-kube-api-access-pbrz6\") pod \"curl\" (UID: \"a8ef2518-d251-4427-aa8e-8676d6ca8513\") " pod="service-telemetry/curl" Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.608245 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7k7gm"] Dec 03 00:37:57 crc kubenswrapper[4805]: W1203 00:37:57.619416 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02a20748_c3ea_4c77_aad9_c586408cee98.slice/crio-08f8d93059397eb5bcfac53e293ac3c209a716650890fa1b6b9ee9004c08931b WatchSource:0}: Error finding container 08f8d93059397eb5bcfac53e293ac3c209a716650890fa1b6b9ee9004c08931b: Status 404 returned error can't find the container with id 
08f8d93059397eb5bcfac53e293ac3c209a716650890fa1b6b9ee9004c08931b Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.673109 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 03 00:37:57 crc kubenswrapper[4805]: I1203 00:37:57.897802 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 03 00:37:57 crc kubenswrapper[4805]: W1203 00:37:57.911888 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8ef2518_d251_4427_aa8e_8676d6ca8513.slice/crio-8261da580b2bccd1c392450bebdb10843bd5f43374fb5198d903d4f077472297 WatchSource:0}: Error finding container 8261da580b2bccd1c392450bebdb10843bd5f43374fb5198d903d4f077472297: Status 404 returned error can't find the container with id 8261da580b2bccd1c392450bebdb10843bd5f43374fb5198d903d4f077472297 Dec 03 00:37:58 crc kubenswrapper[4805]: I1203 00:37:58.398038 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a8ef2518-d251-4427-aa8e-8676d6ca8513","Type":"ContainerStarted","Data":"8261da580b2bccd1c392450bebdb10843bd5f43374fb5198d903d4f077472297"} Dec 03 00:37:58 crc kubenswrapper[4805]: I1203 00:37:58.399452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" event={"ID":"02a20748-c3ea-4c77-aad9-c586408cee98","Type":"ContainerStarted","Data":"08f8d93059397eb5bcfac53e293ac3c209a716650890fa1b6b9ee9004c08931b"} Dec 03 00:38:01 crc kubenswrapper[4805]: I1203 00:38:01.423260 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:38:01 crc kubenswrapper[4805]: E1203 00:38:01.424078 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:38:10 crc kubenswrapper[4805]: I1203 00:38:10.551627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" event={"ID":"02a20748-c3ea-4c77-aad9-c586408cee98","Type":"ContainerStarted","Data":"75a635246e479e551742a526f49c0ff3f14aee84321e072f54424235fc223c04"} Dec 03 00:38:10 crc kubenswrapper[4805]: I1203 00:38:10.554444 4805 generic.go:334] "Generic (PLEG): container finished" podID="a8ef2518-d251-4427-aa8e-8676d6ca8513" containerID="822c51ce5fc172b4f1e719a0f32855aa6b1b5022d5961964d379a639d27acb67" exitCode=0 Dec 03 00:38:10 crc kubenswrapper[4805]: I1203 00:38:10.554509 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a8ef2518-d251-4427-aa8e-8676d6ca8513","Type":"ContainerDied","Data":"822c51ce5fc172b4f1e719a0f32855aa6b1b5022d5961964d379a639d27acb67"} Dec 03 00:38:15 crc kubenswrapper[4805]: I1203 00:38:15.423877 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:38:15 crc kubenswrapper[4805]: E1203 00:38:15.424923 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.336783 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.448948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbrz6\" (UniqueName: \"kubernetes.io/projected/a8ef2518-d251-4427-aa8e-8676d6ca8513-kube-api-access-pbrz6\") pod \"a8ef2518-d251-4427-aa8e-8676d6ca8513\" (UID: \"a8ef2518-d251-4427-aa8e-8676d6ca8513\") " Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.454560 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ef2518-d251-4427-aa8e-8676d6ca8513-kube-api-access-pbrz6" (OuterVolumeSpecName: "kube-api-access-pbrz6") pod "a8ef2518-d251-4427-aa8e-8676d6ca8513" (UID: "a8ef2518-d251-4427-aa8e-8676d6ca8513"). InnerVolumeSpecName "kube-api-access-pbrz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.498093 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_a8ef2518-d251-4427-aa8e-8676d6ca8513/curl/0.log" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.552856 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbrz6\" (UniqueName: \"kubernetes.io/projected/a8ef2518-d251-4427-aa8e-8676d6ca8513-kube-api-access-pbrz6\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.618843 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a8ef2518-d251-4427-aa8e-8676d6ca8513","Type":"ContainerDied","Data":"8261da580b2bccd1c392450bebdb10843bd5f43374fb5198d903d4f077472297"} Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.618917 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8261da580b2bccd1c392450bebdb10843bd5f43374fb5198d903d4f077472297" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.618930 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.628700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" event={"ID":"02a20748-c3ea-4c77-aad9-c586408cee98","Type":"ContainerStarted","Data":"b6a1c0a9d903bcd8504053b0ef65d4f68d10674908fadbd6b449822d2e233806"} Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.652609 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" podStartSLOduration=1.926217756 podStartE2EDuration="20.652578877s" podCreationTimestamp="2025-12-03 00:37:56 +0000 UTC" firstStartedPulling="2025-12-03 00:37:57.621518142 +0000 UTC m=+1901.470480758" lastFinishedPulling="2025-12-03 00:38:16.347879273 +0000 UTC m=+1920.196841879" observedRunningTime="2025-12-03 00:38:16.647897633 +0000 UTC m=+1920.496860239" watchObservedRunningTime="2025-12-03 00:38:16.652578877 +0000 UTC m=+1920.501541483" Dec 03 00:38:16 crc kubenswrapper[4805]: I1203 00:38:16.779074 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-vgprn_5cf29272-b2e7-4917-a402-d27cd7918749/prometheus-webhook-snmp/0.log" Dec 03 00:38:27 crc kubenswrapper[4805]: I1203 00:38:27.423868 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:38:27 crc kubenswrapper[4805]: E1203 00:38:27.424847 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:38:35 crc kubenswrapper[4805]: I1203 00:38:35.366342 4805 
scope.go:117] "RemoveContainer" containerID="4987839dcf54ccd09914c177bbdad95e70be0e54a47164101edff1a4fa30f2ad" Dec 03 00:38:35 crc kubenswrapper[4805]: I1203 00:38:35.410569 4805 scope.go:117] "RemoveContainer" containerID="5da6bbe383954c119ef39f60cd9874465ac32b7c4bc9e2b313306d6635743253" Dec 03 00:38:38 crc kubenswrapper[4805]: I1203 00:38:38.423381 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:38:38 crc kubenswrapper[4805]: E1203 00:38:38.424032 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:38:43 crc kubenswrapper[4805]: I1203 00:38:43.853702 4805 generic.go:334] "Generic (PLEG): container finished" podID="02a20748-c3ea-4c77-aad9-c586408cee98" containerID="75a635246e479e551742a526f49c0ff3f14aee84321e072f54424235fc223c04" exitCode=0 Dec 03 00:38:43 crc kubenswrapper[4805]: I1203 00:38:43.854325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" event={"ID":"02a20748-c3ea-4c77-aad9-c586408cee98","Type":"ContainerDied","Data":"75a635246e479e551742a526f49c0ff3f14aee84321e072f54424235fc223c04"} Dec 03 00:38:43 crc kubenswrapper[4805]: I1203 00:38:43.856441 4805 scope.go:117] "RemoveContainer" containerID="75a635246e479e551742a526f49c0ff3f14aee84321e072f54424235fc223c04" Dec 03 00:38:46 crc kubenswrapper[4805]: I1203 00:38:46.934145 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-vgprn_5cf29272-b2e7-4917-a402-d27cd7918749/prometheus-webhook-snmp/0.log" Dec 03 00:38:48 crc 
kubenswrapper[4805]: I1203 00:38:48.910881 4805 generic.go:334] "Generic (PLEG): container finished" podID="02a20748-c3ea-4c77-aad9-c586408cee98" containerID="b6a1c0a9d903bcd8504053b0ef65d4f68d10674908fadbd6b449822d2e233806" exitCode=0 Dec 03 00:38:48 crc kubenswrapper[4805]: I1203 00:38:48.910966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" event={"ID":"02a20748-c3ea-4c77-aad9-c586408cee98","Type":"ContainerDied","Data":"b6a1c0a9d903bcd8504053b0ef65d4f68d10674908fadbd6b449822d2e233806"} Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.266346 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.422893 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52gx9\" (UniqueName: \"kubernetes.io/projected/02a20748-c3ea-4c77-aad9-c586408cee98-kube-api-access-52gx9\") pod \"02a20748-c3ea-4c77-aad9-c586408cee98\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.423450 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-sensubility-config\") pod \"02a20748-c3ea-4c77-aad9-c586408cee98\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.423576 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-entrypoint-script\") pod \"02a20748-c3ea-4c77-aad9-c586408cee98\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.423598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-publisher\") pod \"02a20748-c3ea-4c77-aad9-c586408cee98\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.423617 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-healthcheck-log\") pod \"02a20748-c3ea-4c77-aad9-c586408cee98\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.423646 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-entrypoint-script\") pod \"02a20748-c3ea-4c77-aad9-c586408cee98\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.423663 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-config\") pod \"02a20748-c3ea-4c77-aad9-c586408cee98\" (UID: \"02a20748-c3ea-4c77-aad9-c586408cee98\") " Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.430322 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a20748-c3ea-4c77-aad9-c586408cee98-kube-api-access-52gx9" (OuterVolumeSpecName: "kube-api-access-52gx9") pod "02a20748-c3ea-4c77-aad9-c586408cee98" (UID: "02a20748-c3ea-4c77-aad9-c586408cee98"). InnerVolumeSpecName "kube-api-access-52gx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.450543 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "02a20748-c3ea-4c77-aad9-c586408cee98" (UID: "02a20748-c3ea-4c77-aad9-c586408cee98"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.457520 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "02a20748-c3ea-4c77-aad9-c586408cee98" (UID: "02a20748-c3ea-4c77-aad9-c586408cee98"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.459977 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "02a20748-c3ea-4c77-aad9-c586408cee98" (UID: "02a20748-c3ea-4c77-aad9-c586408cee98"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.461926 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "02a20748-c3ea-4c77-aad9-c586408cee98" (UID: "02a20748-c3ea-4c77-aad9-c586408cee98"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.464537 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "02a20748-c3ea-4c77-aad9-c586408cee98" (UID: "02a20748-c3ea-4c77-aad9-c586408cee98"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.470761 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "02a20748-c3ea-4c77-aad9-c586408cee98" (UID: "02a20748-c3ea-4c77-aad9-c586408cee98"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.526048 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52gx9\" (UniqueName: \"kubernetes.io/projected/02a20748-c3ea-4c77-aad9-c586408cee98-kube-api-access-52gx9\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.526097 4805 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.526108 4805 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.526121 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.526130 4805 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.526141 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.526157 4805 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/02a20748-c3ea-4c77-aad9-c586408cee98-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.930818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" event={"ID":"02a20748-c3ea-4c77-aad9-c586408cee98","Type":"ContainerDied","Data":"08f8d93059397eb5bcfac53e293ac3c209a716650890fa1b6b9ee9004c08931b"} Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.930909 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f8d93059397eb5bcfac53e293ac3c209a716650890fa1b6b9ee9004c08931b" Dec 03 00:38:50 crc kubenswrapper[4805]: I1203 00:38:50.931039 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7k7gm" Dec 03 00:38:52 crc kubenswrapper[4805]: I1203 00:38:52.400138 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7k7gm_02a20748-c3ea-4c77-aad9-c586408cee98/smoketest-collectd/0.log" Dec 03 00:38:52 crc kubenswrapper[4805]: I1203 00:38:52.424137 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:38:52 crc kubenswrapper[4805]: E1203 00:38:52.424417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:38:52 crc kubenswrapper[4805]: I1203 00:38:52.709643 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7k7gm_02a20748-c3ea-4c77-aad9-c586408cee98/smoketest-ceilometer/0.log" Dec 03 00:38:53 crc kubenswrapper[4805]: I1203 00:38:53.017875 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-72xhl_4e3eb4bc-f157-45dd-8b15-dbcd89d7b6f8/default-interconnect/0.log" Dec 03 00:38:53 crc kubenswrapper[4805]: I1203 00:38:53.314744 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26_2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b/bridge/2.log" Dec 03 00:38:53 crc kubenswrapper[4805]: I1203 00:38:53.653866 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-vcg26_2c30beb6-32ea-4343-9bf0-9cdbf27c6a7b/sg-core/0.log" Dec 03 00:38:53 crc 
kubenswrapper[4805]: I1203 00:38:53.955794 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-66b65b946f-pbx98_aa845b5b-3900-44d4-8b25-e72584cea960/bridge/2.log" Dec 03 00:38:54 crc kubenswrapper[4805]: I1203 00:38:54.267720 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-66b65b946f-pbx98_aa845b5b-3900-44d4-8b25-e72584cea960/sg-core/0.log" Dec 03 00:38:54 crc kubenswrapper[4805]: I1203 00:38:54.596342 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6_df2a6e96-9e6c-4b4c-be6c-7ecdc2372714/bridge/2.log" Dec 03 00:38:54 crc kubenswrapper[4805]: I1203 00:38:54.949968 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-vssz6_df2a6e96-9e6c-4b4c-be6c-7ecdc2372714/sg-core/0.log" Dec 03 00:38:55 crc kubenswrapper[4805]: I1203 00:38:55.278557 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64_9aa2a224-ceaa-4107-af3e-99d9d72fc6f3/bridge/2.log" Dec 03 00:38:55 crc kubenswrapper[4805]: I1203 00:38:55.621672 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-676c54cc7c-kqh64_9aa2a224-ceaa-4107-af3e-99d9d72fc6f3/sg-core/0.log" Dec 03 00:38:56 crc kubenswrapper[4805]: I1203 00:38:56.002182 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-726lw_5e520d91-fcbb-42a4-8955-55b847760c58/bridge/2.log" Dec 03 00:38:56 crc kubenswrapper[4805]: I1203 00:38:56.392268 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-726lw_5e520d91-fcbb-42a4-8955-55b847760c58/sg-core/0.log" 
Dec 03 00:38:59 crc kubenswrapper[4805]: I1203 00:38:59.970924 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-699b8db96d-czl52_edc06863-bc4d-47f8-a0ba-516d554d4343/operator/0.log" Dec 03 00:39:00 crc kubenswrapper[4805]: I1203 00:39:00.297504 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_0deccfcc-828c-479b-a0dd-a42fa9146444/prometheus/0.log" Dec 03 00:39:00 crc kubenswrapper[4805]: I1203 00:39:00.624766 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_080988eb-4bd7-49e9-8689-6f551aa99555/elasticsearch/0.log" Dec 03 00:39:00 crc kubenswrapper[4805]: I1203 00:39:00.922810 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-vgprn_5cf29272-b2e7-4917-a402-d27cd7918749/prometheus-webhook-snmp/0.log" Dec 03 00:39:01 crc kubenswrapper[4805]: I1203 00:39:01.284512 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_44d5a53a-114f-4261-a53e-630fc66811fc/alertmanager/0.log" Dec 03 00:39:07 crc kubenswrapper[4805]: I1203 00:39:07.423768 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:39:07 crc kubenswrapper[4805]: E1203 00:39:07.424760 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" Dec 03 00:39:17 crc kubenswrapper[4805]: I1203 00:39:17.564828 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-6c67bcb598-zj4p2_cc3ecee9-27a5-431e-8a35-9d62915b2df7/operator/0.log" Dec 03 00:39:19 crc kubenswrapper[4805]: I1203 00:39:19.422951 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:39:20 crc kubenswrapper[4805]: I1203 00:39:20.197712 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"5a3ac1d228024bb4737ad8fb9c1bcabee904b7ab6322982ce04c396481575c75"} Dec 03 00:39:20 crc kubenswrapper[4805]: I1203 00:39:20.939292 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-699b8db96d-czl52_edc06863-bc4d-47f8-a0ba-516d554d4343/operator/0.log" Dec 03 00:39:21 crc kubenswrapper[4805]: I1203 00:39:21.276119 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_24364326-cee4-4408-9351-8cd967dc5aef/qdr/0.log" Dec 03 00:39:35 crc kubenswrapper[4805]: I1203 00:39:35.499138 4805 scope.go:117] "RemoveContainer" containerID="a0e2e36d4597cdd2784dcfc46cfe1ddc7349fd197f78bfccfdeb8cd75d5c8123" Dec 03 00:39:35 crc kubenswrapper[4805]: I1203 00:39:35.549650 4805 scope.go:117] "RemoveContainer" containerID="4b6f5927a4a2bc0b8d1a0cd39416c2ab577e8f5b0b86d0e6a1790e68fec3a075" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.253718 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-q28x8"] Dec 03 00:39:47 crc kubenswrapper[4805]: E1203 00:39:47.255015 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a20748-c3ea-4c77-aad9-c586408cee98" containerName="smoketest-collectd" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.255036 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02a20748-c3ea-4c77-aad9-c586408cee98" containerName="smoketest-collectd" Dec 03 00:39:47 crc kubenswrapper[4805]: E1203 00:39:47.255051 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a20748-c3ea-4c77-aad9-c586408cee98" containerName="smoketest-ceilometer" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.255059 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a20748-c3ea-4c77-aad9-c586408cee98" containerName="smoketest-ceilometer" Dec 03 00:39:47 crc kubenswrapper[4805]: E1203 00:39:47.255069 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ef2518-d251-4427-aa8e-8676d6ca8513" containerName="curl" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.255078 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ef2518-d251-4427-aa8e-8676d6ca8513" containerName="curl" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.255259 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a20748-c3ea-4c77-aad9-c586408cee98" containerName="smoketest-collectd" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.255275 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ef2518-d251-4427-aa8e-8676d6ca8513" containerName="curl" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.255288 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a20748-c3ea-4c77-aad9-c586408cee98" containerName="smoketest-ceilometer" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.255969 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.273542 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-q28x8"] Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.432000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9dt2\" (UniqueName: \"kubernetes.io/projected/a610e296-cdb4-45c3-a7a9-fe1656e056b3-kube-api-access-d9dt2\") pod \"service-telemetry-framework-operators-q28x8\" (UID: \"a610e296-cdb4-45c3-a7a9-fe1656e056b3\") " pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.534074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9dt2\" (UniqueName: \"kubernetes.io/projected/a610e296-cdb4-45c3-a7a9-fe1656e056b3-kube-api-access-d9dt2\") pod \"service-telemetry-framework-operators-q28x8\" (UID: \"a610e296-cdb4-45c3-a7a9-fe1656e056b3\") " pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.558981 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9dt2\" (UniqueName: \"kubernetes.io/projected/a610e296-cdb4-45c3-a7a9-fe1656e056b3-kube-api-access-d9dt2\") pod \"service-telemetry-framework-operators-q28x8\" (UID: \"a610e296-cdb4-45c3-a7a9-fe1656e056b3\") " pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:47 crc kubenswrapper[4805]: I1203 00:39:47.586116 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:48 crc kubenswrapper[4805]: I1203 00:39:47.868389 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-q28x8"] Dec 03 00:39:48 crc kubenswrapper[4805]: I1203 00:39:48.480532 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-q28x8" event={"ID":"a610e296-cdb4-45c3-a7a9-fe1656e056b3","Type":"ContainerStarted","Data":"8b2cf7782059cacf285a1ccdf46054aa72ef075f3f6c906ac6ed36589d13968d"} Dec 03 00:39:48 crc kubenswrapper[4805]: I1203 00:39:48.481141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-q28x8" event={"ID":"a610e296-cdb4-45c3-a7a9-fe1656e056b3","Type":"ContainerStarted","Data":"b78063f75e06b16317816bdffd964efcc97f09bc65b0b12bb25292a965ec9574"} Dec 03 00:39:48 crc kubenswrapper[4805]: I1203 00:39:48.507478 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-q28x8" podStartSLOduration=1.378300433 podStartE2EDuration="1.507378487s" podCreationTimestamp="2025-12-03 00:39:47 +0000 UTC" firstStartedPulling="2025-12-03 00:39:47.885697646 +0000 UTC m=+2011.734660272" lastFinishedPulling="2025-12-03 00:39:48.01477572 +0000 UTC m=+2011.863738326" observedRunningTime="2025-12-03 00:39:48.501911375 +0000 UTC m=+2012.350874001" watchObservedRunningTime="2025-12-03 00:39:48.507378487 +0000 UTC m=+2012.356341133" Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.587652 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.588542 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.654799 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.949407 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w5d2j/must-gather-x7bk8"] Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.950969 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.954804 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w5d2j"/"kube-root-ca.crt" Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.954896 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w5d2j"/"openshift-service-ca.crt" Dec 03 00:39:57 crc kubenswrapper[4805]: I1203 00:39:57.983674 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w5d2j/must-gather-x7bk8"] Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.149345 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19c834af-5a46-4b5c-8894-828fccef5d1d-must-gather-output\") pod \"must-gather-x7bk8\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.149477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5mgm\" (UniqueName: \"kubernetes.io/projected/19c834af-5a46-4b5c-8894-828fccef5d1d-kube-api-access-v5mgm\") pod \"must-gather-x7bk8\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " 
pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.251215 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5mgm\" (UniqueName: \"kubernetes.io/projected/19c834af-5a46-4b5c-8894-828fccef5d1d-kube-api-access-v5mgm\") pod \"must-gather-x7bk8\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.251305 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19c834af-5a46-4b5c-8894-828fccef5d1d-must-gather-output\") pod \"must-gather-x7bk8\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.251800 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19c834af-5a46-4b5c-8894-828fccef5d1d-must-gather-output\") pod \"must-gather-x7bk8\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.273435 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5mgm\" (UniqueName: \"kubernetes.io/projected/19c834af-5a46-4b5c-8894-828fccef5d1d-kube-api-access-v5mgm\") pod \"must-gather-x7bk8\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.281406 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.648517 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.690175 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w5d2j/must-gather-x7bk8"] Dec 03 00:39:58 crc kubenswrapper[4805]: I1203 00:39:58.724554 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-q28x8"] Dec 03 00:39:59 crc kubenswrapper[4805]: I1203 00:39:59.607455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" event={"ID":"19c834af-5a46-4b5c-8894-828fccef5d1d","Type":"ContainerStarted","Data":"eebac2af5b2569adf234610157fe774c8c107ca93c0b548660f4c6ddbaa65fed"} Dec 03 00:40:00 crc kubenswrapper[4805]: I1203 00:40:00.619456 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-q28x8" podUID="a610e296-cdb4-45c3-a7a9-fe1656e056b3" containerName="registry-server" containerID="cri-o://8b2cf7782059cacf285a1ccdf46054aa72ef075f3f6c906ac6ed36589d13968d" gracePeriod=2 Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.516140 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4wbn8"] Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.547268 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4wbn8"] Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.547476 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.629751 4805 generic.go:334] "Generic (PLEG): container finished" podID="a610e296-cdb4-45c3-a7a9-fe1656e056b3" containerID="8b2cf7782059cacf285a1ccdf46054aa72ef075f3f6c906ac6ed36589d13968d" exitCode=0 Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.629806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-q28x8" event={"ID":"a610e296-cdb4-45c3-a7a9-fe1656e056b3","Type":"ContainerDied","Data":"8b2cf7782059cacf285a1ccdf46054aa72ef075f3f6c906ac6ed36589d13968d"} Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.712597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-utilities\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.712735 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-catalog-content\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.712784 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62b6\" (UniqueName: \"kubernetes.io/projected/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-kube-api-access-s62b6\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.814372 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-catalog-content\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.814472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s62b6\" (UniqueName: \"kubernetes.io/projected/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-kube-api-access-s62b6\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.814515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-utilities\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.815068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-utilities\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.815355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-catalog-content\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.851631 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s62b6\" 
(UniqueName: \"kubernetes.io/projected/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-kube-api-access-s62b6\") pod \"redhat-operators-4wbn8\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:01 crc kubenswrapper[4805]: I1203 00:40:01.891843 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.322600 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.397351 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4wbn8"] Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.432295 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9dt2\" (UniqueName: \"kubernetes.io/projected/a610e296-cdb4-45c3-a7a9-fe1656e056b3-kube-api-access-d9dt2\") pod \"a610e296-cdb4-45c3-a7a9-fe1656e056b3\" (UID: \"a610e296-cdb4-45c3-a7a9-fe1656e056b3\") " Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.439328 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a610e296-cdb4-45c3-a7a9-fe1656e056b3-kube-api-access-d9dt2" (OuterVolumeSpecName: "kube-api-access-d9dt2") pod "a610e296-cdb4-45c3-a7a9-fe1656e056b3" (UID: "a610e296-cdb4-45c3-a7a9-fe1656e056b3"). InnerVolumeSpecName "kube-api-access-d9dt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.534764 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9dt2\" (UniqueName: \"kubernetes.io/projected/a610e296-cdb4-45c3-a7a9-fe1656e056b3-kube-api-access-d9dt2\") on node \"crc\" DevicePath \"\"" Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.689929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" event={"ID":"19c834af-5a46-4b5c-8894-828fccef5d1d","Type":"ContainerStarted","Data":"628327dd6e2f9eedff2b99eb13ee58698ad9e5a49e55cfa5b2130af0710d1668"} Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.691801 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-q28x8" Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.692911 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-q28x8" event={"ID":"a610e296-cdb4-45c3-a7a9-fe1656e056b3","Type":"ContainerDied","Data":"b78063f75e06b16317816bdffd964efcc97f09bc65b0b12bb25292a965ec9574"} Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.692991 4805 scope.go:117] "RemoveContainer" containerID="8b2cf7782059cacf285a1ccdf46054aa72ef075f3f6c906ac6ed36589d13968d" Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.695212 4805 generic.go:334] "Generic (PLEG): container finished" podID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerID="1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be" exitCode=0 Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.695248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4wbn8" event={"ID":"a8e0f275-4e72-4f83-88d7-9ba83fadccb1","Type":"ContainerDied","Data":"1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be"} Dec 03 00:40:07 crc 
kubenswrapper[4805]: I1203 00:40:07.695265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4wbn8" event={"ID":"a8e0f275-4e72-4f83-88d7-9ba83fadccb1","Type":"ContainerStarted","Data":"bc2b1bd5cc0351c28fb3a966262e3025a2e3d5775d853c8e0c0c8afe5425ec52"} Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.697844 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.752844 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-q28x8"] Dec 03 00:40:07 crc kubenswrapper[4805]: I1203 00:40:07.758639 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-q28x8"] Dec 03 00:40:08 crc kubenswrapper[4805]: I1203 00:40:08.438074 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a610e296-cdb4-45c3-a7a9-fe1656e056b3" path="/var/lib/kubelet/pods/a610e296-cdb4-45c3-a7a9-fe1656e056b3/volumes" Dec 03 00:40:08 crc kubenswrapper[4805]: I1203 00:40:08.721141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" event={"ID":"19c834af-5a46-4b5c-8894-828fccef5d1d","Type":"ContainerStarted","Data":"08861393e6d0531ba2bcf7092207c42a6ec011358c7354d410b5958bb5bc83aa"} Dec 03 00:40:08 crc kubenswrapper[4805]: I1203 00:40:08.729797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4wbn8" event={"ID":"a8e0f275-4e72-4f83-88d7-9ba83fadccb1","Type":"ContainerStarted","Data":"0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40"} Dec 03 00:40:08 crc kubenswrapper[4805]: I1203 00:40:08.761735 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" podStartSLOduration=3.345724417 podStartE2EDuration="11.76171766s" 
podCreationTimestamp="2025-12-03 00:39:57 +0000 UTC" firstStartedPulling="2025-12-03 00:39:58.694916694 +0000 UTC m=+2022.543879300" lastFinishedPulling="2025-12-03 00:40:07.110909937 +0000 UTC m=+2030.959872543" observedRunningTime="2025-12-03 00:40:08.744978843 +0000 UTC m=+2032.593941449" watchObservedRunningTime="2025-12-03 00:40:08.76171766 +0000 UTC m=+2032.610680266" Dec 03 00:40:09 crc kubenswrapper[4805]: I1203 00:40:09.762042 4805 generic.go:334] "Generic (PLEG): container finished" podID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerID="0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40" exitCode=0 Dec 03 00:40:09 crc kubenswrapper[4805]: I1203 00:40:09.763004 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4wbn8" event={"ID":"a8e0f275-4e72-4f83-88d7-9ba83fadccb1","Type":"ContainerDied","Data":"0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40"} Dec 03 00:40:10 crc kubenswrapper[4805]: I1203 00:40:10.775858 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4wbn8" event={"ID":"a8e0f275-4e72-4f83-88d7-9ba83fadccb1","Type":"ContainerStarted","Data":"e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701"} Dec 03 00:40:10 crc kubenswrapper[4805]: I1203 00:40:10.796831 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4wbn8" podStartSLOduration=7.220300876 podStartE2EDuration="9.796804391s" podCreationTimestamp="2025-12-03 00:40:01 +0000 UTC" firstStartedPulling="2025-12-03 00:40:07.697620169 +0000 UTC m=+2031.546582775" lastFinishedPulling="2025-12-03 00:40:10.274123684 +0000 UTC m=+2034.123086290" observedRunningTime="2025-12-03 00:40:10.794230448 +0000 UTC m=+2034.643193074" watchObservedRunningTime="2025-12-03 00:40:10.796804391 +0000 UTC m=+2034.645767017" Dec 03 00:40:11 crc kubenswrapper[4805]: I1203 00:40:11.891983 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:11 crc kubenswrapper[4805]: I1203 00:40:11.892578 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:12 crc kubenswrapper[4805]: I1203 00:40:12.952594 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4wbn8" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="registry-server" probeResult="failure" output=< Dec 03 00:40:12 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 03 00:40:12 crc kubenswrapper[4805]: > Dec 03 00:40:21 crc kubenswrapper[4805]: I1203 00:40:21.969578 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:22 crc kubenswrapper[4805]: I1203 00:40:22.055131 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:22 crc kubenswrapper[4805]: I1203 00:40:22.214413 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4wbn8"] Dec 03 00:40:23 crc kubenswrapper[4805]: I1203 00:40:23.891250 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4wbn8" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="registry-server" containerID="cri-o://e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701" gracePeriod=2 Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.303333 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.314106 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-catalog-content\") pod \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.314171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s62b6\" (UniqueName: \"kubernetes.io/projected/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-kube-api-access-s62b6\") pod \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.314275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-utilities\") pod \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\" (UID: \"a8e0f275-4e72-4f83-88d7-9ba83fadccb1\") " Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.315263 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-utilities" (OuterVolumeSpecName: "utilities") pod "a8e0f275-4e72-4f83-88d7-9ba83fadccb1" (UID: "a8e0f275-4e72-4f83-88d7-9ba83fadccb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.321473 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-kube-api-access-s62b6" (OuterVolumeSpecName: "kube-api-access-s62b6") pod "a8e0f275-4e72-4f83-88d7-9ba83fadccb1" (UID: "a8e0f275-4e72-4f83-88d7-9ba83fadccb1"). InnerVolumeSpecName "kube-api-access-s62b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.415924 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s62b6\" (UniqueName: \"kubernetes.io/projected/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-kube-api-access-s62b6\") on node \"crc\" DevicePath \"\"" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.415958 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.436593 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8e0f275-4e72-4f83-88d7-9ba83fadccb1" (UID: "a8e0f275-4e72-4f83-88d7-9ba83fadccb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.516769 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e0f275-4e72-4f83-88d7-9ba83fadccb1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.902275 4805 generic.go:334] "Generic (PLEG): container finished" podID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerID="e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701" exitCode=0 Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.902334 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4wbn8" event={"ID":"a8e0f275-4e72-4f83-88d7-9ba83fadccb1","Type":"ContainerDied","Data":"e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701"} Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.902371 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-4wbn8" event={"ID":"a8e0f275-4e72-4f83-88d7-9ba83fadccb1","Type":"ContainerDied","Data":"bc2b1bd5cc0351c28fb3a966262e3025a2e3d5775d853c8e0c0c8afe5425ec52"} Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.902396 4805 scope.go:117] "RemoveContainer" containerID="e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.902451 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4wbn8" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.940800 4805 scope.go:117] "RemoveContainer" containerID="0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.941795 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4wbn8"] Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.951035 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4wbn8"] Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.964410 4805 scope.go:117] "RemoveContainer" containerID="1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.987326 4805 scope.go:117] "RemoveContainer" containerID="e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701" Dec 03 00:40:24 crc kubenswrapper[4805]: E1203 00:40:24.988108 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701\": container with ID starting with e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701 not found: ID does not exist" containerID="e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.988233 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701"} err="failed to get container status \"e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701\": rpc error: code = NotFound desc = could not find container \"e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701\": container with ID starting with e3c5073d5f5e9bed6742e03544bca4503c25a004053792b6c1f0641af34fc701 not found: ID does not exist" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.988300 4805 scope.go:117] "RemoveContainer" containerID="0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40" Dec 03 00:40:24 crc kubenswrapper[4805]: E1203 00:40:24.988792 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40\": container with ID starting with 0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40 not found: ID does not exist" containerID="0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.988850 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40"} err="failed to get container status \"0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40\": rpc error: code = NotFound desc = could not find container \"0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40\": container with ID starting with 0bcb595489fab797b15b60c1ed3291c828b54629018041000de9a791ad19fe40 not found: ID does not exist" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.988879 4805 scope.go:117] "RemoveContainer" containerID="1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be" Dec 03 00:40:24 crc kubenswrapper[4805]: E1203 
00:40:24.989412 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be\": container with ID starting with 1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be not found: ID does not exist" containerID="1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be" Dec 03 00:40:24 crc kubenswrapper[4805]: I1203 00:40:24.989452 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be"} err="failed to get container status \"1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be\": rpc error: code = NotFound desc = could not find container \"1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be\": container with ID starting with 1a3fbb6f1a91b5d9284f04b3382c3ece9c20a6f1947b116b8ea9f96fc08ca9be not found: ID does not exist" Dec 03 00:40:26 crc kubenswrapper[4805]: I1203 00:40:26.435407 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" path="/var/lib/kubelet/pods/a8e0f275-4e72-4f83-88d7-9ba83fadccb1/volumes" Dec 03 00:40:54 crc kubenswrapper[4805]: I1203 00:40:54.682389 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7sxvt_2cd88982-bc5c-4811-9794-b7342f16d887/control-plane-machine-set-operator/0.log" Dec 03 00:40:54 crc kubenswrapper[4805]: I1203 00:40:54.881279 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwpvv_b2409836-005c-4e36-ae98-14a3053117d1/kube-rbac-proxy/0.log" Dec 03 00:40:54 crc kubenswrapper[4805]: I1203 00:40:54.899804 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwpvv_b2409836-005c-4e36-ae98-14a3053117d1/machine-api-operator/0.log" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.909164 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b4c6j"] Dec 03 00:41:06 crc kubenswrapper[4805]: E1203 00:41:06.910038 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="extract-utilities" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.910051 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="extract-utilities" Dec 03 00:41:06 crc kubenswrapper[4805]: E1203 00:41:06.910079 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="registry-server" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.910086 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="registry-server" Dec 03 00:41:06 crc kubenswrapper[4805]: E1203 00:41:06.910094 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a610e296-cdb4-45c3-a7a9-fe1656e056b3" containerName="registry-server" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.910101 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a610e296-cdb4-45c3-a7a9-fe1656e056b3" containerName="registry-server" Dec 03 00:41:06 crc kubenswrapper[4805]: E1203 00:41:06.910108 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="extract-content" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.910114 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="extract-content" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.910285 4805 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a610e296-cdb4-45c3-a7a9-fe1656e056b3" containerName="registry-server" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.910301 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e0f275-4e72-4f83-88d7-9ba83fadccb1" containerName="registry-server" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.911424 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:06 crc kubenswrapper[4805]: I1203 00:41:06.926844 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4c6j"] Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.004505 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-catalog-content\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.004588 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-utilities\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.004807 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkc57\" (UniqueName: \"kubernetes.io/projected/69116690-6acc-4d0e-8bdb-d39e5285c5dd-kube-api-access-dkc57\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.106948 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkc57\" (UniqueName: \"kubernetes.io/projected/69116690-6acc-4d0e-8bdb-d39e5285c5dd-kube-api-access-dkc57\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.107180 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-catalog-content\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.107434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-utilities\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.107850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-catalog-content\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.108150 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-utilities\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.157187 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dkc57\" (UniqueName: \"kubernetes.io/projected/69116690-6acc-4d0e-8bdb-d39e5285c5dd-kube-api-access-dkc57\") pod \"certified-operators-b4c6j\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.236209 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:07 crc kubenswrapper[4805]: I1203 00:41:07.764999 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4c6j"] Dec 03 00:41:08 crc kubenswrapper[4805]: I1203 00:41:08.372479 4805 generic.go:334] "Generic (PLEG): container finished" podID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerID="5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227" exitCode=0 Dec 03 00:41:08 crc kubenswrapper[4805]: I1203 00:41:08.372549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4c6j" event={"ID":"69116690-6acc-4d0e-8bdb-d39e5285c5dd","Type":"ContainerDied","Data":"5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227"} Dec 03 00:41:08 crc kubenswrapper[4805]: I1203 00:41:08.372614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4c6j" event={"ID":"69116690-6acc-4d0e-8bdb-d39e5285c5dd","Type":"ContainerStarted","Data":"e12c33edd15a3a3e922f1a6a8329e20e3bb1975d003726c512b9cddc3b24df85"} Dec 03 00:41:09 crc kubenswrapper[4805]: I1203 00:41:09.382715 4805 generic.go:334] "Generic (PLEG): container finished" podID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerID="ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f" exitCode=0 Dec 03 00:41:09 crc kubenswrapper[4805]: I1203 00:41:09.382807 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4c6j" 
event={"ID":"69116690-6acc-4d0e-8bdb-d39e5285c5dd","Type":"ContainerDied","Data":"ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f"} Dec 03 00:41:09 crc kubenswrapper[4805]: I1203 00:41:09.697422 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-nplwz_62abc6ca-92f7-46ac-b22d-333dfa327640/cert-manager-controller/0.log" Dec 03 00:41:09 crc kubenswrapper[4805]: I1203 00:41:09.862080 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2w8gg_76757abd-1789-4cfa-b495-18540305dd10/cert-manager-cainjector/0.log" Dec 03 00:41:10 crc kubenswrapper[4805]: I1203 00:41:10.108846 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bw2pp_e4899534-51b2-4ed8-81e4-ea8a7c0f55ac/cert-manager-webhook/0.log" Dec 03 00:41:10 crc kubenswrapper[4805]: I1203 00:41:10.392215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4c6j" event={"ID":"69116690-6acc-4d0e-8bdb-d39e5285c5dd","Type":"ContainerStarted","Data":"81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4"} Dec 03 00:41:10 crc kubenswrapper[4805]: I1203 00:41:10.420171 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b4c6j" podStartSLOduration=2.993709802 podStartE2EDuration="4.420144907s" podCreationTimestamp="2025-12-03 00:41:06 +0000 UTC" firstStartedPulling="2025-12-03 00:41:08.375526305 +0000 UTC m=+2092.224488921" lastFinishedPulling="2025-12-03 00:41:09.80196142 +0000 UTC m=+2093.650924026" observedRunningTime="2025-12-03 00:41:10.41286004 +0000 UTC m=+2094.261822646" watchObservedRunningTime="2025-12-03 00:41:10.420144907 +0000 UTC m=+2094.269107513" Dec 03 00:41:17 crc kubenswrapper[4805]: I1203 00:41:17.237127 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:17 crc kubenswrapper[4805]: I1203 00:41:17.238958 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:17 crc kubenswrapper[4805]: I1203 00:41:17.291848 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:17 crc kubenswrapper[4805]: I1203 00:41:17.512811 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:17 crc kubenswrapper[4805]: I1203 00:41:17.564109 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4c6j"] Dec 03 00:41:19 crc kubenswrapper[4805]: I1203 00:41:19.486408 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b4c6j" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="registry-server" containerID="cri-o://81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4" gracePeriod=2 Dec 03 00:41:19 crc kubenswrapper[4805]: I1203 00:41:19.875604 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.047927 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-utilities\") pod \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.048095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkc57\" (UniqueName: \"kubernetes.io/projected/69116690-6acc-4d0e-8bdb-d39e5285c5dd-kube-api-access-dkc57\") pod \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.048163 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-catalog-content\") pod \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\" (UID: \"69116690-6acc-4d0e-8bdb-d39e5285c5dd\") " Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.048755 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-utilities" (OuterVolumeSpecName: "utilities") pod "69116690-6acc-4d0e-8bdb-d39e5285c5dd" (UID: "69116690-6acc-4d0e-8bdb-d39e5285c5dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.057545 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69116690-6acc-4d0e-8bdb-d39e5285c5dd-kube-api-access-dkc57" (OuterVolumeSpecName: "kube-api-access-dkc57") pod "69116690-6acc-4d0e-8bdb-d39e5285c5dd" (UID: "69116690-6acc-4d0e-8bdb-d39e5285c5dd"). InnerVolumeSpecName "kube-api-access-dkc57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.097768 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69116690-6acc-4d0e-8bdb-d39e5285c5dd" (UID: "69116690-6acc-4d0e-8bdb-d39e5285c5dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.150503 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.150575 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkc57\" (UniqueName: \"kubernetes.io/projected/69116690-6acc-4d0e-8bdb-d39e5285c5dd-kube-api-access-dkc57\") on node \"crc\" DevicePath \"\"" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.150598 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69116690-6acc-4d0e-8bdb-d39e5285c5dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.506577 4805 generic.go:334] "Generic (PLEG): container finished" podID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerID="81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4" exitCode=0 Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.506635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4c6j" event={"ID":"69116690-6acc-4d0e-8bdb-d39e5285c5dd","Type":"ContainerDied","Data":"81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4"} Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.506682 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-b4c6j" event={"ID":"69116690-6acc-4d0e-8bdb-d39e5285c5dd","Type":"ContainerDied","Data":"e12c33edd15a3a3e922f1a6a8329e20e3bb1975d003726c512b9cddc3b24df85"} Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.506706 4805 scope.go:117] "RemoveContainer" containerID="81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.506758 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4c6j" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.531865 4805 scope.go:117] "RemoveContainer" containerID="ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.539658 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4c6j"] Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.549053 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b4c6j"] Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.559769 4805 scope.go:117] "RemoveContainer" containerID="5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.608027 4805 scope.go:117] "RemoveContainer" containerID="81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4" Dec 03 00:41:20 crc kubenswrapper[4805]: E1203 00:41:20.609503 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4\": container with ID starting with 81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4 not found: ID does not exist" containerID="81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 
00:41:20.609577 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4"} err="failed to get container status \"81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4\": rpc error: code = NotFound desc = could not find container \"81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4\": container with ID starting with 81695a9a044a271208fb6994c1dee5ee7598c0c6291c69b741e7a803633b83a4 not found: ID does not exist" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.609621 4805 scope.go:117] "RemoveContainer" containerID="ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f" Dec 03 00:41:20 crc kubenswrapper[4805]: E1203 00:41:20.610381 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f\": container with ID starting with ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f not found: ID does not exist" containerID="ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.610433 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f"} err="failed to get container status \"ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f\": rpc error: code = NotFound desc = could not find container \"ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f\": container with ID starting with ad01ab9f1dcf257db0271b21bb66df46be1403d47ef1b61d10f4eac3474a7d8f not found: ID does not exist" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.610470 4805 scope.go:117] "RemoveContainer" containerID="5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227" Dec 03 00:41:20 crc 
kubenswrapper[4805]: E1203 00:41:20.610854 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227\": container with ID starting with 5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227 not found: ID does not exist" containerID="5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227" Dec 03 00:41:20 crc kubenswrapper[4805]: I1203 00:41:20.610902 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227"} err="failed to get container status \"5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227\": rpc error: code = NotFound desc = could not find container \"5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227\": container with ID starting with 5c4d56213daaa0e178340b9f01818c3bbad354229f61c1de1f947e6db894d227 not found: ID does not exist" Dec 03 00:41:22 crc kubenswrapper[4805]: I1203 00:41:22.438389 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" path="/var/lib/kubelet/pods/69116690-6acc-4d0e-8bdb-d39e5285c5dd/volumes" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.151587 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz_fc243a86-5fa7-4260-8c90-92eaaac927fe/util/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.342386 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz_fc243a86-5fa7-4260-8c90-92eaaac927fe/pull/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.356365 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz_fc243a86-5fa7-4260-8c90-92eaaac927fe/util/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.427045 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz_fc243a86-5fa7-4260-8c90-92eaaac927fe/pull/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.609878 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz_fc243a86-5fa7-4260-8c90-92eaaac927fe/util/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.609903 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz_fc243a86-5fa7-4260-8c90-92eaaac927fe/pull/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.640309 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ad8ktz_fc243a86-5fa7-4260-8c90-92eaaac927fe/extract/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.797097 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk_a6db602f-7a39-4cad-83c5-4c13ef73feb5/util/0.log" Dec 03 00:41:27 crc kubenswrapper[4805]: I1203 00:41:27.955023 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk_a6db602f-7a39-4cad-83c5-4c13ef73feb5/util/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.025092 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk_a6db602f-7a39-4cad-83c5-4c13ef73feb5/pull/0.log" Dec 03 
00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.059796 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk_a6db602f-7a39-4cad-83c5-4c13ef73feb5/pull/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.228417 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk_a6db602f-7a39-4cad-83c5-4c13ef73feb5/util/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.241902 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk_a6db602f-7a39-4cad-83c5-4c13ef73feb5/pull/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.307251 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210x6fsk_a6db602f-7a39-4cad-83c5-4c13ef73feb5/extract/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.419332 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr_11b61252-9fc1-4387-a598-411f2b3c2833/util/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.635643 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr_11b61252-9fc1-4387-a598-411f2b3c2833/util/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.644033 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr_11b61252-9fc1-4387-a598-411f2b3c2833/pull/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.683077 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr_11b61252-9fc1-4387-a598-411f2b3c2833/pull/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.838880 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr_11b61252-9fc1-4387-a598-411f2b3c2833/util/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.844358 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr_11b61252-9fc1-4387-a598-411f2b3c2833/pull/0.log" Dec 03 00:41:28 crc kubenswrapper[4805]: I1203 00:41:28.853057 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftp9qr_11b61252-9fc1-4387-a598-411f2b3c2833/extract/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.029559 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td_dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4/util/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.187882 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td_dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4/pull/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.188982 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td_dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4/pull/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.208006 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td_dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4/util/0.log" Dec 03 
00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.400837 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td_dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4/util/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.432354 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td_dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4/pull/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.476883 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejn9td_dd3972f3-a16b-4fec-8bfd-a0bebf5c65c4/extract/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.616252 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sn25s_566e3605-01a5-487e-9152-a9de0f1aa9e7/extract-utilities/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.774724 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sn25s_566e3605-01a5-487e-9152-a9de0f1aa9e7/extract-content/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.801032 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sn25s_566e3605-01a5-487e-9152-a9de0f1aa9e7/extract-utilities/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.829344 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sn25s_566e3605-01a5-487e-9152-a9de0f1aa9e7/extract-content/0.log" Dec 03 00:41:29 crc kubenswrapper[4805]: I1203 00:41:29.991476 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sn25s_566e3605-01a5-487e-9152-a9de0f1aa9e7/extract-utilities/0.log" Dec 03 00:41:30 crc 
kubenswrapper[4805]: I1203 00:41:30.009719 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sn25s_566e3605-01a5-487e-9152-a9de0f1aa9e7/extract-content/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.227411 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wcfkh_d07c760a-7a6a-48a8-aec2-4beb15f31c70/extract-utilities/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.406979 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sn25s_566e3605-01a5-487e-9152-a9de0f1aa9e7/registry-server/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.452770 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wcfkh_d07c760a-7a6a-48a8-aec2-4beb15f31c70/extract-content/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.463607 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wcfkh_d07c760a-7a6a-48a8-aec2-4beb15f31c70/extract-utilities/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.478847 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wcfkh_d07c760a-7a6a-48a8-aec2-4beb15f31c70/extract-content/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.698728 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wcfkh_d07c760a-7a6a-48a8-aec2-4beb15f31c70/extract-content/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.707313 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wcfkh_d07c760a-7a6a-48a8-aec2-4beb15f31c70/extract-utilities/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.808418 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cks6r_cad4d2ba-cd8b-4886-90cd-ff09fdbc6206/marketplace-operator/0.log" Dec 03 00:41:30 crc kubenswrapper[4805]: I1203 00:41:30.962689 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsx6d_b90bcdd7-2155-4f6f-bad9-19cea6e78c63/extract-utilities/0.log" Dec 03 00:41:31 crc kubenswrapper[4805]: I1203 00:41:31.082894 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wcfkh_d07c760a-7a6a-48a8-aec2-4beb15f31c70/registry-server/0.log" Dec 03 00:41:31 crc kubenswrapper[4805]: I1203 00:41:31.191545 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsx6d_b90bcdd7-2155-4f6f-bad9-19cea6e78c63/extract-utilities/0.log" Dec 03 00:41:31 crc kubenswrapper[4805]: I1203 00:41:31.211075 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsx6d_b90bcdd7-2155-4f6f-bad9-19cea6e78c63/extract-content/0.log" Dec 03 00:41:31 crc kubenswrapper[4805]: I1203 00:41:31.213389 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsx6d_b90bcdd7-2155-4f6f-bad9-19cea6e78c63/extract-content/0.log" Dec 03 00:41:31 crc kubenswrapper[4805]: I1203 00:41:31.433371 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsx6d_b90bcdd7-2155-4f6f-bad9-19cea6e78c63/extract-utilities/0.log" Dec 03 00:41:31 crc kubenswrapper[4805]: I1203 00:41:31.436491 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hsx6d_b90bcdd7-2155-4f6f-bad9-19cea6e78c63/extract-content/0.log" Dec 03 00:41:31 crc kubenswrapper[4805]: I1203 00:41:31.784919 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hsx6d_b90bcdd7-2155-4f6f-bad9-19cea6e78c63/registry-server/0.log" Dec 03 00:41:45 crc kubenswrapper[4805]: I1203 00:41:45.550591 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-n4dq8_164269e7-9f89-47f8-b363-bcb620782a98/prometheus-operator/0.log" Dec 03 00:41:45 crc kubenswrapper[4805]: I1203 00:41:45.711491 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-665c8bf78f-2srvq_dd60859c-d506-402e-90b3-22e44a9cde9a/prometheus-operator-admission-webhook/0.log" Dec 03 00:41:45 crc kubenswrapper[4805]: I1203 00:41:45.755585 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-665c8bf78f-mkcgl_c8d0d1b6-817f-4d74-8373-1e186de34888/prometheus-operator-admission-webhook/0.log" Dec 03 00:41:45 crc kubenswrapper[4805]: I1203 00:41:45.977556 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-zb45p_c1a67691-4899-4efb-92fd-8e374caac92f/perses-operator/0.log" Dec 03 00:41:45 crc kubenswrapper[4805]: I1203 00:41:45.992741 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-sg4sp_283d9c09-ce9f-43b1-9849-49926b74fdb2/operator/0.log" Dec 03 00:41:47 crc kubenswrapper[4805]: I1203 00:41:47.810806 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:41:47 crc kubenswrapper[4805]: I1203 00:41:47.811314 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" 
podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:42:17 crc kubenswrapper[4805]: I1203 00:42:17.811400 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:42:17 crc kubenswrapper[4805]: I1203 00:42:17.812260 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:42:38 crc kubenswrapper[4805]: I1203 00:42:38.272321 4805 generic.go:334] "Generic (PLEG): container finished" podID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerID="628327dd6e2f9eedff2b99eb13ee58698ad9e5a49e55cfa5b2130af0710d1668" exitCode=0 Dec 03 00:42:38 crc kubenswrapper[4805]: I1203 00:42:38.272415 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" event={"ID":"19c834af-5a46-4b5c-8894-828fccef5d1d","Type":"ContainerDied","Data":"628327dd6e2f9eedff2b99eb13ee58698ad9e5a49e55cfa5b2130af0710d1668"} Dec 03 00:42:38 crc kubenswrapper[4805]: I1203 00:42:38.273567 4805 scope.go:117] "RemoveContainer" containerID="628327dd6e2f9eedff2b99eb13ee58698ad9e5a49e55cfa5b2130af0710d1668" Dec 03 00:42:38 crc kubenswrapper[4805]: I1203 00:42:38.555897 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w5d2j_must-gather-x7bk8_19c834af-5a46-4b5c-8894-828fccef5d1d/gather/0.log" Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.202334 4805 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w5d2j/must-gather-x7bk8"] Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.203179 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerName="copy" containerID="cri-o://08861393e6d0531ba2bcf7092207c42a6ec011358c7354d410b5958bb5bc83aa" gracePeriod=2 Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.210349 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w5d2j/must-gather-x7bk8"] Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.326393 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w5d2j_must-gather-x7bk8_19c834af-5a46-4b5c-8894-828fccef5d1d/copy/0.log" Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.326938 4805 generic.go:334] "Generic (PLEG): container finished" podID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerID="08861393e6d0531ba2bcf7092207c42a6ec011358c7354d410b5958bb5bc83aa" exitCode=143 Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.581384 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w5d2j_must-gather-x7bk8_19c834af-5a46-4b5c-8894-828fccef5d1d/copy/0.log" Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.581860 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.613991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19c834af-5a46-4b5c-8894-828fccef5d1d-must-gather-output\") pod \"19c834af-5a46-4b5c-8894-828fccef5d1d\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.614184 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5mgm\" (UniqueName: \"kubernetes.io/projected/19c834af-5a46-4b5c-8894-828fccef5d1d-kube-api-access-v5mgm\") pod \"19c834af-5a46-4b5c-8894-828fccef5d1d\" (UID: \"19c834af-5a46-4b5c-8894-828fccef5d1d\") " Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.623604 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c834af-5a46-4b5c-8894-828fccef5d1d-kube-api-access-v5mgm" (OuterVolumeSpecName: "kube-api-access-v5mgm") pod "19c834af-5a46-4b5c-8894-828fccef5d1d" (UID: "19c834af-5a46-4b5c-8894-828fccef5d1d"). InnerVolumeSpecName "kube-api-access-v5mgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.674706 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19c834af-5a46-4b5c-8894-828fccef5d1d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "19c834af-5a46-4b5c-8894-828fccef5d1d" (UID: "19c834af-5a46-4b5c-8894-828fccef5d1d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.715708 4805 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19c834af-5a46-4b5c-8894-828fccef5d1d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 00:42:45 crc kubenswrapper[4805]: I1203 00:42:45.715742 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5mgm\" (UniqueName: \"kubernetes.io/projected/19c834af-5a46-4b5c-8894-828fccef5d1d-kube-api-access-v5mgm\") on node \"crc\" DevicePath \"\"" Dec 03 00:42:46 crc kubenswrapper[4805]: I1203 00:42:46.338339 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w5d2j_must-gather-x7bk8_19c834af-5a46-4b5c-8894-828fccef5d1d/copy/0.log" Dec 03 00:42:46 crc kubenswrapper[4805]: I1203 00:42:46.338715 4805 scope.go:117] "RemoveContainer" containerID="08861393e6d0531ba2bcf7092207c42a6ec011358c7354d410b5958bb5bc83aa" Dec 03 00:42:46 crc kubenswrapper[4805]: I1203 00:42:46.338845 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5d2j/must-gather-x7bk8" Dec 03 00:42:46 crc kubenswrapper[4805]: I1203 00:42:46.368914 4805 scope.go:117] "RemoveContainer" containerID="628327dd6e2f9eedff2b99eb13ee58698ad9e5a49e55cfa5b2130af0710d1668" Dec 03 00:42:46 crc kubenswrapper[4805]: I1203 00:42:46.433400 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" path="/var/lib/kubelet/pods/19c834af-5a46-4b5c-8894-828fccef5d1d/volumes" Dec 03 00:42:47 crc kubenswrapper[4805]: I1203 00:42:47.810968 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 00:42:47 crc kubenswrapper[4805]: I1203 00:42:47.811487 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 00:42:47 crc kubenswrapper[4805]: I1203 00:42:47.811543 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" Dec 03 00:42:47 crc kubenswrapper[4805]: I1203 00:42:47.812221 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a3ac1d228024bb4737ad8fb9c1bcabee904b7ab6322982ce04c396481575c75"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 00:42:47 crc kubenswrapper[4805]: I1203 00:42:47.812319 4805 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://5a3ac1d228024bb4737ad8fb9c1bcabee904b7ab6322982ce04c396481575c75" gracePeriod=600 Dec 03 00:42:48 crc kubenswrapper[4805]: I1203 00:42:48.358053 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="5a3ac1d228024bb4737ad8fb9c1bcabee904b7ab6322982ce04c396481575c75" exitCode=0 Dec 03 00:42:48 crc kubenswrapper[4805]: I1203 00:42:48.358108 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"5a3ac1d228024bb4737ad8fb9c1bcabee904b7ab6322982ce04c396481575c75"} Dec 03 00:42:48 crc kubenswrapper[4805]: I1203 00:42:48.358145 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerStarted","Data":"5cd6fb35011f6d67f374842eb8e53dbe4677b80fc7df1958df14c2b165c73b30"} Dec 03 00:42:48 crc kubenswrapper[4805]: I1203 00:42:48.358167 4805 scope.go:117] "RemoveContainer" containerID="97ceaf60dcab65ddfddabf45548c8852f4e8961243b62510ab92c0a4b9476a18" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.054843 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snp75"] Dec 03 00:44:27 crc kubenswrapper[4805]: E1203 00:44:27.060175 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="registry-server" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060248 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="registry-server" Dec 03 00:44:27 crc 
kubenswrapper[4805]: E1203 00:44:27.060290 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerName="gather" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060302 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerName="gather" Dec 03 00:44:27 crc kubenswrapper[4805]: E1203 00:44:27.060317 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="extract-utilities" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060330 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="extract-utilities" Dec 03 00:44:27 crc kubenswrapper[4805]: E1203 00:44:27.060356 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="extract-content" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060369 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="extract-content" Dec 03 00:44:27 crc kubenswrapper[4805]: E1203 00:44:27.060393 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerName="copy" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060408 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerName="copy" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060676 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="69116690-6acc-4d0e-8bdb-d39e5285c5dd" containerName="registry-server" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060710 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerName="gather" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.060743 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="19c834af-5a46-4b5c-8894-828fccef5d1d" containerName="copy" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.063172 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.067924 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snp75"] Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.231078 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-catalog-content\") pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.231230 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mmb7\" (UniqueName: \"kubernetes.io/projected/b3f54656-993f-4526-b27f-f985ee343ab6-kube-api-access-5mmb7\") pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.231389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-utilities\") pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.334391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-catalog-content\") 
pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.334557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mmb7\" (UniqueName: \"kubernetes.io/projected/b3f54656-993f-4526-b27f-f985ee343ab6-kube-api-access-5mmb7\") pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.334629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-utilities\") pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.335426 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-utilities\") pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.336007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-catalog-content\") pod \"community-operators-snp75\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.369774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mmb7\" (UniqueName: \"kubernetes.io/projected/b3f54656-993f-4526-b27f-f985ee343ab6-kube-api-access-5mmb7\") pod \"community-operators-snp75\" 
(UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.399565 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:27 crc kubenswrapper[4805]: I1203 00:44:27.729077 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snp75"] Dec 03 00:44:28 crc kubenswrapper[4805]: I1203 00:44:28.360715 4805 generic.go:334] "Generic (PLEG): container finished" podID="b3f54656-993f-4526-b27f-f985ee343ab6" containerID="52fdd662ad849ef6819cbe87cd73016eba83f45e604c7dc99c48fe9a3ad69e96" exitCode=0 Dec 03 00:44:28 crc kubenswrapper[4805]: I1203 00:44:28.360772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snp75" event={"ID":"b3f54656-993f-4526-b27f-f985ee343ab6","Type":"ContainerDied","Data":"52fdd662ad849ef6819cbe87cd73016eba83f45e604c7dc99c48fe9a3ad69e96"} Dec 03 00:44:28 crc kubenswrapper[4805]: I1203 00:44:28.360808 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snp75" event={"ID":"b3f54656-993f-4526-b27f-f985ee343ab6","Type":"ContainerStarted","Data":"5fe1b031a793566fa8418ad6de136033f736c0b8f6ca342185583f2cff4fe76b"} Dec 03 00:44:29 crc kubenswrapper[4805]: I1203 00:44:29.371618 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snp75" event={"ID":"b3f54656-993f-4526-b27f-f985ee343ab6","Type":"ContainerStarted","Data":"63eb4b29d46662ccf600e3dbd17b31980e738eb35f35f4b02795f83a3bbe3c91"} Dec 03 00:44:30 crc kubenswrapper[4805]: I1203 00:44:30.387133 4805 generic.go:334] "Generic (PLEG): container finished" podID="b3f54656-993f-4526-b27f-f985ee343ab6" containerID="63eb4b29d46662ccf600e3dbd17b31980e738eb35f35f4b02795f83a3bbe3c91" exitCode=0 Dec 03 00:44:30 crc kubenswrapper[4805]: I1203 
00:44:30.387242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snp75" event={"ID":"b3f54656-993f-4526-b27f-f985ee343ab6","Type":"ContainerDied","Data":"63eb4b29d46662ccf600e3dbd17b31980e738eb35f35f4b02795f83a3bbe3c91"} Dec 03 00:44:31 crc kubenswrapper[4805]: I1203 00:44:31.400707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snp75" event={"ID":"b3f54656-993f-4526-b27f-f985ee343ab6","Type":"ContainerStarted","Data":"abd19131c05992fe2b5ca532bf846675eddbac2352226b6768c4be8d9386a315"} Dec 03 00:44:31 crc kubenswrapper[4805]: I1203 00:44:31.428865 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snp75" podStartSLOduration=1.933733216 podStartE2EDuration="4.428833398s" podCreationTimestamp="2025-12-03 00:44:27 +0000 UTC" firstStartedPulling="2025-12-03 00:44:28.363476501 +0000 UTC m=+2292.212439107" lastFinishedPulling="2025-12-03 00:44:30.858576663 +0000 UTC m=+2294.707539289" observedRunningTime="2025-12-03 00:44:31.420150741 +0000 UTC m=+2295.269113357" watchObservedRunningTime="2025-12-03 00:44:31.428833398 +0000 UTC m=+2295.277796004" Dec 03 00:44:37 crc kubenswrapper[4805]: I1203 00:44:37.399889 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:37 crc kubenswrapper[4805]: I1203 00:44:37.400764 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:37 crc kubenswrapper[4805]: I1203 00:44:37.477362 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:37 crc kubenswrapper[4805]: I1203 00:44:37.561116 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snp75" Dec 
03 00:44:37 crc kubenswrapper[4805]: I1203 00:44:37.735959 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snp75"] Dec 03 00:44:39 crc kubenswrapper[4805]: I1203 00:44:39.471344 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-snp75" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" containerName="registry-server" containerID="cri-o://abd19131c05992fe2b5ca532bf846675eddbac2352226b6768c4be8d9386a315" gracePeriod=2 Dec 03 00:44:40 crc kubenswrapper[4805]: I1203 00:44:40.485631 4805 generic.go:334] "Generic (PLEG): container finished" podID="b3f54656-993f-4526-b27f-f985ee343ab6" containerID="abd19131c05992fe2b5ca532bf846675eddbac2352226b6768c4be8d9386a315" exitCode=0 Dec 03 00:44:40 crc kubenswrapper[4805]: I1203 00:44:40.486223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snp75" event={"ID":"b3f54656-993f-4526-b27f-f985ee343ab6","Type":"ContainerDied","Data":"abd19131c05992fe2b5ca532bf846675eddbac2352226b6768c4be8d9386a315"} Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.127675 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.186136 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mmb7\" (UniqueName: \"kubernetes.io/projected/b3f54656-993f-4526-b27f-f985ee343ab6-kube-api-access-5mmb7\") pod \"b3f54656-993f-4526-b27f-f985ee343ab6\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.186212 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-catalog-content\") pod \"b3f54656-993f-4526-b27f-f985ee343ab6\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.186399 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-utilities\") pod \"b3f54656-993f-4526-b27f-f985ee343ab6\" (UID: \"b3f54656-993f-4526-b27f-f985ee343ab6\") " Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.187438 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-utilities" (OuterVolumeSpecName: "utilities") pod "b3f54656-993f-4526-b27f-f985ee343ab6" (UID: "b3f54656-993f-4526-b27f-f985ee343ab6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.193497 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f54656-993f-4526-b27f-f985ee343ab6-kube-api-access-5mmb7" (OuterVolumeSpecName: "kube-api-access-5mmb7") pod "b3f54656-993f-4526-b27f-f985ee343ab6" (UID: "b3f54656-993f-4526-b27f-f985ee343ab6"). InnerVolumeSpecName "kube-api-access-5mmb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.240357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3f54656-993f-4526-b27f-f985ee343ab6" (UID: "b3f54656-993f-4526-b27f-f985ee343ab6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.288961 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mmb7\" (UniqueName: \"kubernetes.io/projected/b3f54656-993f-4526-b27f-f985ee343ab6-kube-api-access-5mmb7\") on node \"crc\" DevicePath \"\"" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.289007 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.289025 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3f54656-993f-4526-b27f-f985ee343ab6-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.498050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snp75" event={"ID":"b3f54656-993f-4526-b27f-f985ee343ab6","Type":"ContainerDied","Data":"5fe1b031a793566fa8418ad6de136033f736c0b8f6ca342185583f2cff4fe76b"} Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.498180 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snp75" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.498254 4805 scope.go:117] "RemoveContainer" containerID="abd19131c05992fe2b5ca532bf846675eddbac2352226b6768c4be8d9386a315" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.520704 4805 scope.go:117] "RemoveContainer" containerID="63eb4b29d46662ccf600e3dbd17b31980e738eb35f35f4b02795f83a3bbe3c91" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.559299 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snp75"] Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.562736 4805 scope.go:117] "RemoveContainer" containerID="52fdd662ad849ef6819cbe87cd73016eba83f45e604c7dc99c48fe9a3ad69e96" Dec 03 00:44:41 crc kubenswrapper[4805]: I1203 00:44:41.574970 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-snp75"] Dec 03 00:44:42 crc kubenswrapper[4805]: I1203 00:44:42.435808 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" path="/var/lib/kubelet/pods/b3f54656-993f-4526-b27f-f985ee343ab6/volumes" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.050622 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-operators-xxjsj"] Dec 03 00:44:54 crc kubenswrapper[4805]: E1203 00:44:54.056703 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" containerName="extract-utilities" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.056776 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" containerName="extract-utilities" Dec 03 00:44:54 crc kubenswrapper[4805]: E1203 00:44:54.056831 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" 
containerName="registry-server" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.056847 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" containerName="registry-server" Dec 03 00:44:54 crc kubenswrapper[4805]: E1203 00:44:54.056880 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" containerName="extract-content" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.056894 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" containerName="extract-content" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.063109 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f54656-993f-4526-b27f-f985ee343ab6" containerName="registry-server" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.066757 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.103524 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-xxjsj"] Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.127688 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2s55\" (UniqueName: \"kubernetes.io/projected/04625c15-30a5-45d3-8623-efcff3f91bf7-kube-api-access-q2s55\") pod \"service-telemetry-framework-operators-xxjsj\" (UID: \"04625c15-30a5-45d3-8623-efcff3f91bf7\") " pod="service-telemetry/service-telemetry-framework-operators-xxjsj" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.229609 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2s55\" (UniqueName: \"kubernetes.io/projected/04625c15-30a5-45d3-8623-efcff3f91bf7-kube-api-access-q2s55\") pod 
\"service-telemetry-framework-operators-xxjsj\" (UID: \"04625c15-30a5-45d3-8623-efcff3f91bf7\") " pod="service-telemetry/service-telemetry-framework-operators-xxjsj" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.263602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2s55\" (UniqueName: \"kubernetes.io/projected/04625c15-30a5-45d3-8623-efcff3f91bf7-kube-api-access-q2s55\") pod \"service-telemetry-framework-operators-xxjsj\" (UID: \"04625c15-30a5-45d3-8623-efcff3f91bf7\") " pod="service-telemetry/service-telemetry-framework-operators-xxjsj" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.434356 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" Dec 03 00:44:54 crc kubenswrapper[4805]: I1203 00:44:54.893081 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-xxjsj"] Dec 03 00:44:55 crc kubenswrapper[4805]: I1203 00:44:55.664728 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" event={"ID":"04625c15-30a5-45d3-8623-efcff3f91bf7","Type":"ContainerStarted","Data":"8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79"} Dec 03 00:44:55 crc kubenswrapper[4805]: I1203 00:44:55.665149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" event={"ID":"04625c15-30a5-45d3-8623-efcff3f91bf7","Type":"ContainerStarted","Data":"ffbc46270c65a6e3a70b8c8dc9f45f2adce02a87eb0f624fc9ae26f43e9a3cac"} Dec 03 00:44:55 crc kubenswrapper[4805]: I1203 00:44:55.692652 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" podStartSLOduration=1.528952765 podStartE2EDuration="1.692619783s" podCreationTimestamp="2025-12-03 00:44:54 +0000 UTC" 
firstStartedPulling="2025-12-03 00:44:54.907674681 +0000 UTC m=+2318.756637327" lastFinishedPulling="2025-12-03 00:44:55.071341739 +0000 UTC m=+2318.920304345" observedRunningTime="2025-12-03 00:44:55.685987364 +0000 UTC m=+2319.534949970" watchObservedRunningTime="2025-12-03 00:44:55.692619783 +0000 UTC m=+2319.541582399" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.156612 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk"] Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.158669 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.162005 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.162148 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.180082 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk"] Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.237694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-secret-volume\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.237776 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fqmb\" (UniqueName: 
\"kubernetes.io/projected/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-kube-api-access-2fqmb\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.237821 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-config-volume\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.339546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-secret-volume\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.339605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fqmb\" (UniqueName: \"kubernetes.io/projected/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-kube-api-access-2fqmb\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.339635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-config-volume\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc 
kubenswrapper[4805]: I1203 00:45:00.341056 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-config-volume\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.352162 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-secret-volume\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.359372 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fqmb\" (UniqueName: \"kubernetes.io/projected/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-kube-api-access-2fqmb\") pod \"collect-profiles-29412045-6v4nk\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.479938 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:00 crc kubenswrapper[4805]: I1203 00:45:00.985005 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk"] Dec 03 00:45:01 crc kubenswrapper[4805]: I1203 00:45:01.721885 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c2f3192-9427-4ae5-a387-3dc8eb0f3178" containerID="fc6748a309fb4ec132b6ff077fc909c100bcefa5b796324c375bdcd7bd4c7d66" exitCode=0 Dec 03 00:45:01 crc kubenswrapper[4805]: I1203 00:45:01.721934 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" event={"ID":"5c2f3192-9427-4ae5-a387-3dc8eb0f3178","Type":"ContainerDied","Data":"fc6748a309fb4ec132b6ff077fc909c100bcefa5b796324c375bdcd7bd4c7d66"} Dec 03 00:45:01 crc kubenswrapper[4805]: I1203 00:45:01.721998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" event={"ID":"5c2f3192-9427-4ae5-a387-3dc8eb0f3178","Type":"ContainerStarted","Data":"4fb43e9ff904b28c9079e8bd4107c54bed85e1e34013bde34a61c293ec1c462e"} Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.075049 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.194953 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-config-volume\") pod \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.195066 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fqmb\" (UniqueName: \"kubernetes.io/projected/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-kube-api-access-2fqmb\") pod \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.195156 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-secret-volume\") pod \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\" (UID: \"5c2f3192-9427-4ae5-a387-3dc8eb0f3178\") " Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.196489 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c2f3192-9427-4ae5-a387-3dc8eb0f3178" (UID: "5c2f3192-9427-4ae5-a387-3dc8eb0f3178"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.202772 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c2f3192-9427-4ae5-a387-3dc8eb0f3178" (UID: "5c2f3192-9427-4ae5-a387-3dc8eb0f3178"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.203550 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-kube-api-access-2fqmb" (OuterVolumeSpecName: "kube-api-access-2fqmb") pod "5c2f3192-9427-4ae5-a387-3dc8eb0f3178" (UID: "5c2f3192-9427-4ae5-a387-3dc8eb0f3178"). InnerVolumeSpecName "kube-api-access-2fqmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.298138 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.298180 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.298206 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fqmb\" (UniqueName: \"kubernetes.io/projected/5c2f3192-9427-4ae5-a387-3dc8eb0f3178-kube-api-access-2fqmb\") on node \"crc\" DevicePath \"\"" Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.743290 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk"
Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.743152 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412045-6v4nk" event={"ID":"5c2f3192-9427-4ae5-a387-3dc8eb0f3178","Type":"ContainerDied","Data":"4fb43e9ff904b28c9079e8bd4107c54bed85e1e34013bde34a61c293ec1c462e"}
Dec 03 00:45:03 crc kubenswrapper[4805]: I1203 00:45:03.743748 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fb43e9ff904b28c9079e8bd4107c54bed85e1e34013bde34a61c293ec1c462e"
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.167259 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq"]
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.175556 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412000-64tqq"]
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.432497 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd1a9d56-21a3-450e-b9af-fc132ee10466" path="/var/lib/kubelet/pods/fd1a9d56-21a3-450e-b9af-fc132ee10466/volumes"
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.435439 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/service-telemetry-framework-operators-xxjsj"
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.436343 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/service-telemetry-framework-operators-xxjsj"
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.478713 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/service-telemetry-framework-operators-xxjsj"
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.784895 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/service-telemetry-framework-operators-xxjsj"
Dec 03 00:45:04 crc kubenswrapper[4805]: I1203 00:45:04.837167 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-xxjsj"]
Dec 03 00:45:06 crc kubenswrapper[4805]: I1203 00:45:06.785294 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" podUID="04625c15-30a5-45d3-8623-efcff3f91bf7" containerName="registry-server" containerID="cri-o://8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79" gracePeriod=2
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.230584 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-xxjsj"
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.271729 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2s55\" (UniqueName: \"kubernetes.io/projected/04625c15-30a5-45d3-8623-efcff3f91bf7-kube-api-access-q2s55\") pod \"04625c15-30a5-45d3-8623-efcff3f91bf7\" (UID: \"04625c15-30a5-45d3-8623-efcff3f91bf7\") "
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.283239 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04625c15-30a5-45d3-8623-efcff3f91bf7-kube-api-access-q2s55" (OuterVolumeSpecName: "kube-api-access-q2s55") pod "04625c15-30a5-45d3-8623-efcff3f91bf7" (UID: "04625c15-30a5-45d3-8623-efcff3f91bf7"). InnerVolumeSpecName "kube-api-access-q2s55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.373700 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2s55\" (UniqueName: \"kubernetes.io/projected/04625c15-30a5-45d3-8623-efcff3f91bf7-kube-api-access-q2s55\") on node \"crc\" DevicePath \"\""
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.794415 4805 generic.go:334] "Generic (PLEG): container finished" podID="04625c15-30a5-45d3-8623-efcff3f91bf7" containerID="8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79" exitCode=0
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.794490 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-operators-xxjsj"
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.794504 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" event={"ID":"04625c15-30a5-45d3-8623-efcff3f91bf7","Type":"ContainerDied","Data":"8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79"}
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.794611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-operators-xxjsj" event={"ID":"04625c15-30a5-45d3-8623-efcff3f91bf7","Type":"ContainerDied","Data":"ffbc46270c65a6e3a70b8c8dc9f45f2adce02a87eb0f624fc9ae26f43e9a3cac"}
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.794635 4805 scope.go:117] "RemoveContainer" containerID="8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79"
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.814405 4805 scope.go:117] "RemoveContainer" containerID="8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79"
Dec 03 00:45:07 crc kubenswrapper[4805]: E1203 00:45:07.814948 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79\": container with ID starting with 8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79 not found: ID does not exist" containerID="8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79"
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.814997 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79"} err="failed to get container status \"8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79\": rpc error: code = NotFound desc = could not find container \"8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79\": container with ID starting with 8d882f60f6609fb3b7d67ee81588d4c1daa55652d374e31789fede910f424c79 not found: ID does not exist"
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.839653 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-xxjsj"]
Dec 03 00:45:07 crc kubenswrapper[4805]: I1203 00:45:07.846794 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-operators-xxjsj"]
Dec 03 00:45:08 crc kubenswrapper[4805]: I1203 00:45:08.431992 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04625c15-30a5-45d3-8623-efcff3f91bf7" path="/var/lib/kubelet/pods/04625c15-30a5-45d3-8623-efcff3f91bf7/volumes"
Dec 03 00:45:17 crc kubenswrapper[4805]: I1203 00:45:17.812314 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:45:17 crc kubenswrapper[4805]: I1203 00:45:17.813390 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:45:35 crc kubenswrapper[4805]: I1203 00:45:35.793426 4805 scope.go:117] "RemoveContainer" containerID="268ac17880cd826d20dfae46c476da988e604b8fd722445aebac63eb4a9655f8"
Dec 03 00:45:47 crc kubenswrapper[4805]: I1203 00:45:47.811858 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:45:47 crc kubenswrapper[4805]: I1203 00:45:47.812892 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:46:17 crc kubenswrapper[4805]: I1203 00:46:17.812031 4805 patch_prober.go:28] interesting pod/machine-config-daemon-dd5rs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 00:46:17 crc kubenswrapper[4805]: I1203 00:46:17.813078 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 00:46:17 crc kubenswrapper[4805]: I1203 00:46:17.813164 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs"
Dec 03 00:46:17 crc kubenswrapper[4805]: I1203 00:46:17.814427 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cd6fb35011f6d67f374842eb8e53dbe4677b80fc7df1958df14c2b165c73b30"} pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 00:46:17 crc kubenswrapper[4805]: I1203 00:46:17.814555 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerName="machine-config-daemon" containerID="cri-o://5cd6fb35011f6d67f374842eb8e53dbe4677b80fc7df1958df14c2b165c73b30" gracePeriod=600
Dec 03 00:46:17 crc kubenswrapper[4805]: E1203 00:46:17.946837 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53"
Dec 03 00:46:18 crc kubenswrapper[4805]: I1203 00:46:18.566333 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d6da4d-d781-4243-b5c3-28a8cf91ef53" containerID="5cd6fb35011f6d67f374842eb8e53dbe4677b80fc7df1958df14c2b165c73b30" exitCode=0
Dec 03 00:46:18 crc kubenswrapper[4805]: I1203 00:46:18.566419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" event={"ID":"42d6da4d-d781-4243-b5c3-28a8cf91ef53","Type":"ContainerDied","Data":"5cd6fb35011f6d67f374842eb8e53dbe4677b80fc7df1958df14c2b165c73b30"}
Dec 03 00:46:18 crc kubenswrapper[4805]: I1203 00:46:18.566497 4805 scope.go:117] "RemoveContainer" containerID="5a3ac1d228024bb4737ad8fb9c1bcabee904b7ab6322982ce04c396481575c75"
Dec 03 00:46:18 crc kubenswrapper[4805]: I1203 00:46:18.567428 4805 scope.go:117] "RemoveContainer" containerID="5cd6fb35011f6d67f374842eb8e53dbe4677b80fc7df1958df14c2b165c73b30"
Dec 03 00:46:18 crc kubenswrapper[4805]: E1203 00:46:18.567873 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dd5rs_openshift-machine-config-operator(42d6da4d-d781-4243-b5c3-28a8cf91ef53)\"" pod="openshift-machine-config-operator/machine-config-daemon-dd5rs" podUID="42d6da4d-d781-4243-b5c3-28a8cf91ef53"